A GUIDE TO EVALUATING CRIME CONTROL PROGRAMS IN PUBLIC HOUSING

Prepared for: U.S. Department of Housing and Urban Development, Office of Policy Development and Research
Prepared by: KRA Corporation
April 1997

------------------------------------------------------

The contents of this report are the views of the contractor and do not necessarily reflect the views or policies of the Department of Housing and Urban Development or the U.S. Government.

------------------------------------------------------

FOREWORD

As part of our criminal justice research agenda, the Office of Policy Development and Research (PD&R) originally contracted with the KRA Corporation to review evaluation research on violence prevention initiatives. The objective of the review was to identify interpersonal violence prevention strategies that had been demonstrated to be effective for use in urban neighborhoods. We had hoped to produce a resource book from which public housing authorities and other social service agencies could glean helpful prevention information in their struggle to conquer the violent crime that plagues so many of our cities.

However, KRA found few proven violence prevention strategies. There was no shortage of prevention programs, but rather a marked shortage of evaluations. Therefore, in consultation with KRA, PD&R decided to produce a manual on how to conduct program evaluations of crime control programs, with special emphasis on violence prevention efforts in public housing. Thus, this volume, A Guide to Evaluating Crime Control Programs in Public Housing, was written.

We hope that you find the Guide helpful.

Michael A. Stegman
Assistant Secretary for Policy Development and Research

------------------------------------------------------

TABLE OF CONTENTS

Introduction
Chapter 1: Why Should You Evaluate Your Program?
Chapter 2: What Is Evaluation?
Chapter 3: Who Should Conduct Your Evaluation?
Chapter 4: How Do You Prepare for an Evaluation?
Chapter 5: Developing an Evaluation Plan
Chapter 6: How Do You Get the Information You Need for Your Evaluation?
Chapter 7: Analyzing Evaluation Information
Chapter 8: Reporting Your Findings
Glossary
Resources

------------------------------------------------------

INTRODUCTION

The Office of Policy Development and Research in the Department of Housing and Urban Development (HUD) contracted with KRA Corporation (KRA) to review evaluation research on violence prevention initiatives and identify interpersonal violence prevention strategies that are effective for use in public housing. Successful prevention initiatives to be identified were to focus on ways to reduce accessibility of firearms, drug trafficking, conflicts between youth as individuals and as members of gangs, and abuse of alcohol and other drugs. This study was designed to meet the pressing need for information to shape the efforts of those seeking to prevent or reduce the incidence of violence in public housing and similar environments.

The original intent was to create an inventory of violence prevention initiatives that had been evaluated and shown to be effective in:

o Lowering the incidence of violent criminal behaviors in public housing and similar environments.
o Displacing criminal behavior known to be correlated with a high incidence of violence.
The inventory was to contribute to the creation of a resource document that could be used to plan local initiatives for reducing the incidence of violent criminal behavior in public housing and similar environments and reducing other criminal behavior associated with high rates of violence.

Very little evaluation research was identified upon completion of the inventory. HUD therefore changed the project to better meet the needs of Public Housing Agency (PHA) directors and others in the housing field. As an alternative to a document to assist in implementing violence prevention initiatives, HUD and KRA created this how-to manual on evaluating violence prevention efforts. This manual is designed to do the following:

o Familiarize PHA directors and others in public housing with evaluation issues and procedures so they can participate in the process and effectively monitor the work of internal or external evaluators.
o Provide PHA directors and others in public housing with skills for thinking through the design of a violence prevention project, even if an evaluation is not planned.

------------------------------------------------------

CHAPTER 1 -- WHY SHOULD YOU EVALUATE YOUR PROGRAM?

As a public housing administrator or key staff member in a public housing environment, you may have already asked yourself the question, "Why do I need to evaluate my program?" Most likely, your first responsibility has been to provide safe housing for residents. You may wonder why you need to evaluate, or measure the performance of, a program that seems to be working. The first chapter of this manual presents reasons why you might want to evaluate an existing program or consider an evaluation before planning your next program.

Have you wanted answers to the following questions about any of the programs in your public housing developments?

o Are our current programs meeting residents' needs?
o How can I measure the performance of our programs?
o How can I be sure a specific program is working?
o How can I find out if the violence prevention services we provide are leading to changes in the safety of residents?
o What works best to reduce violence and crime for our public housing residents?

This manual explains how program evaluation can answer each of these questions.

Benefits of evaluation

Evaluation measures performance. As a public housing administrator, you want to measure program performance. Evaluation provides tangible evidence that you are putting resources into programs that benefit residents. More importantly, it helps you direct those scarce resources to support programs that work. Evaluation is just as useful for determining what doesn't work in a program, and it provides information you can use to improve your current efforts.

Evaluation demonstrates program benefits to funding sources and to the community. If you have a violence prevention program that works, you should share this success with funding sources, residents, and the community at large. The public or private agency or foundation that funded this program will want to know that it is supporting a successful project. Additionally, this information can be used to attract other potential funders. Agencies also often require programs to measure performance or provide information on program results, service quality, and customer (resident) satisfaction.
In addition, evaluation can provide useful information on the impact of the violence prevention program to a variety of audiences who are in a position to support your efforts, such as State and local officials, local law enforcement agencies, neighborhood associations, and community leaders. Evaluation will give you the evidence you need to obtain support for your program. You can use results to solicit funds from other funding sources, to support a request for additional funds to expand the program, or to justify offering the same program in another location.

Evaluation can help improve your program's effectiveness. Another benefit of conducting a program evaluation is that the findings will help you improve your program. You will be able to say with confidence that changes or improvements in the program are directly related to your program's evaluated intervention(s).

Evaluations create an opportunity to share information about what works with similar agencies. If you have a program that has been shown through evaluation to be effective, you can share this valuable information with other public housing agencies. You will have documented evaluation findings that show that the program design succeeded. Other agencies of similar size, in similar environments, will be able to replicate your program knowing that it works.

------------------------------------------------------

USING EVALUATION RESULTS: ONE EXAMPLE

A funder provided $40,000 in seed money to implement a midnight basketball program for boys and girls ages 16 to 20 residing in public housing. Surveys were administered to the youths both before and after the program was implemented.

The survey findings showed that prior to the program:

o 92 percent of the youths surveyed reported they expected to get into some kind of trouble in the next 3 months.
o 66 percent of the youths thought they would be victims of violent acts during that same period.

Following implementation of the basketball program:

o 20 percent of the youths surveyed stated they expected to get into some kind of trouble.
o Only 5 percent of the youths expected to be crime victims.

Evaluation of the midnight basketball program revealed a 78 percent reduction in the juvenile offender crime rate among youths 16 to 20 years old in the precinct where the public housing development is located. The primary reason the youths gave for their survey responses was that having a midnight basketball program gave them something positive to do. Community residents were also surveyed and responded that they felt both their community and their children were safer because of the midnight basketball program.

The summary findings above could be used to demonstrate to residents and the community at large that this program was successful in preventing and reducing violence. The midnight basketball program administrators could also present the findings to the city council to justify a request for continued funding.

------------------------------------------------------
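The arithmetic behind a before-and-after comparison like the one above is simple enough to compute by hand or with a short script. The sketch below is purely illustrative: the survey percentages are the figures from the hypothetical example above, and the offense counts are invented to show how a percentage change is calculated. In a real evaluation, the inputs would come from your own surveys and local police data.

```python
# Illustrative before-and-after survey results from the hypothetical
# midnight basketball example above (shares answering "yes").
before = {"expects trouble in next 3 months": 0.92, "expects to be a crime victim": 0.66}
after = {"expects trouble in next 3 months": 0.20, "expects to be a crime victim": 0.05}

for measure in before:
    point_change = after[measure] - before[measure]        # change in percentage points
    relative_change = point_change / before[measure]       # change relative to the baseline
    print(f"{measure}: {before[measure]:.0%} -> {after[measure]:.0%} "
          f"({point_change:+.0%} points, {relative_change:+.0%} relative)")

# The same calculation applies to incident counts, such as juvenile offenses
# recorded by the precinct (the counts below are invented for illustration).
offenses_before, offenses_after = 50, 11
print(f"Change in recorded offenses: {(offenses_after - offenses_before) / offenses_before:+.0%}")
```

A comparison like this shows the size of a change; it does not by itself prove that the program caused the change, a point chapter 2 returns to.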
An evaluation can provide you with information on the process of implementing the program as well as the outcomes. Other public housing agencies will be able to benefit from lessons learned during your program implementation phase and the subsequent evaluation.

COMMON CONCERNS ABOUT EVALUATION

Concern #1. Evaluation diverts resources from the program itself. Although an evaluation will cost something, it does not have to divert resources from other program activities. Consider developing a separate budget to support an evaluation. For example, when creating your initial program budget, you could divide the money into two distinct components -- one for program activities and one for the evaluation. If your agency has funds set aside for research purposes, your evaluation could be funded from this pool of money. Or you could obtain financial support from a source other than the one funding your programmatic activities.

Concern #2. Evaluation increases the burden on program staff. Often program staff are responsible for collecting evaluation information because they are the most familiar with program participants and have the most contact with them. Despite this potential for increased burden, staff can benefit greatly from evaluation because it provides information that can help the staff improve their work with participants, learn more about program and participant needs, and validate their successes. Also, it's possible to decrease the burden on staff somewhat by incorporating evaluation into ongoing program activities. More information on how to do this appears in chapter 5.

Concern #3. Evaluation is too complicated. This is a common misconception. An evaluation is intended to answer basic questions about programs. Usually, evaluation questions include the following:

o How well did the program perform based on its stated performance indicators?
o Did the program meet its stated objectives? (For example, did the program prevent violence for the target population it served?)
o Did change occur, and what were the changes?

Concern #4. Evaluation may produce negative results and hurt the program. An evaluation may show that the program has not worked, but it is more likely to show that the program alone is not responsible for any changes that may have occurred. The goal of most evaluations is to determine whether changes can be attributed to a particular component of the program.

Concern #5. Evaluation is another form of monitoring. Evaluation is not the same as monitoring. However, often the information examined to monitor program operations is similar to or overlaps with the information needed to conduct an evaluation. Monitoring looks at whether the program elements are being provided. For example, are staff doing what is called for by the program plan? Are they spending their time on appropriate program activities? Evaluation differs from monitoring in that it attempts to relate the desired outcomes to the program activities to see whether these activities produced the outcomes.

TIPS FOR CONDUCTING A SUCCESSFUL EVALUATION

Invest in planning. Before you begin, develop a plan that details what you are planning to evaluate, the timeframe for conducting the evaluation, who will do the evaluation, what resources are available, and what you plan to do with the findings. Ideally, the evaluation will be planned at the same time that the program gets underway. For more information on developing an evaluation plan, see chapter 5.

Set aside adequate resources for the evaluation. Resources include not only funds to support the evaluation, but also staff time to complete evaluation activities.

Begin the evaluation during the initial stages of program implementation. It is always best to start evaluation activities as the program gets underway. Having information about the program from the very beginning enables you to make modifications if you determine that any aspect of the program is not working.
Obviously, it is far better to make changes in a planned program early in the implementation phase than to carry out a program knowing that some aspect of it is not working well. For example, say that as part of a violence prevention effort, your PHA has established a peer mediation program for youths ages 8 to 15 residing in two public housing developments. This 3-month program operates daily from 9 a.m. to noon in the local recreation center. During the first week of operation you determine that attendance is only 30 percent of what you expected. Most of the targeted youths have opted to go to the open swim at the recreation center or are working mornings with the summer youth employment program. In this hypothetical situation, you would need to make a decision about how to modify the peer mediation program to ensure participation by its intended target population. Let's say you decide to change the hours of the peer mediation class to late afternoon and to offer an incentive to youths who complete the program. Flyers are distributed announcing the time change and weekly cookouts for all participants who attend daily. This is an example of changing a program design in its early stages, when it becomes apparent that participation is likely to be much lower than expected. If this change in program design had not been made early in the implementation phase, the program may have experienced low levels of participation and may not have produced the desired outcomes. Promote resident and staff participation in the evaluation. Participation of both the residents and housing authority staff in the evaluation is critical to its success. Residents and staff need to feel that both the evaluation and the findings will benefit everyone involved. They should fully understand the importance of the evaluation process and how the evaluation findings can help them, their community, and the program. One way to encourage resident/staff ownership of and responsibility for the evaluation is to involve them in the evaluation. This can be accomplished during the period when you are collecting information about the program and participants (commonly referred to as the data collection phase of the program evaluation). Often staff are the best data collectors because they are the ones most actively involved in day-to-day program operations. Be realistic about the burden of an evaluation. Evaluations require work. Even if you hire an outside evaluator to conduct one, you and your staff will have to spend time arranging for the evaluator to access data needed to carry out the study. Depending on the type of data collection planned for the evaluation, there may be the need to interview staff, review records, or distribute and collect survey questionnaires from residents and program participants. This is another reason that it is important to explain to staff and residents why you need an evaluation and how it will benefit them. Ensure confidentiality of resident responses. Obtaining personal data is always sensitive; however, getting people to discuss their thoughts and/or involvement in illegal or violent activities is particularly touchy. When promising confidentiality of responses to the survey instruments, it is important to adhere to this commitment through all evaluation activities. Public housing residents who participate in any evaluation should be informed that they are taking part in one and that they have the right to refuse to participate without jeopardizing their participation in the program. 
Have participants sign consent forms containing assurances that they will not be asked to leave their housing units based on information they supply for the evaluation and informing them that the evaluation is designed so that individual responses cannot be linked back to a specific participant. More information on informed consent and a sample consent form appear in chapter 6.

Consider cultural issues. You will want to ensure that the evaluation is relevant to and respectful of the cultural backgrounds and individuality of program participants. Chapter 6 also contains more discussion of cultural relevance.

------------------------------------------------------

CHAPTER 2 -- WHAT IS EVALUATION?

You and your staff have probably asked yourselves questions such as: Are our residents safe in their homes? Are crime rates decreasing? Are residents satisfied with new security efforts? Evaluation addresses these same questions using systematic methods to ensure that the answers are supported by evidence. Program evaluation is simply a systematic method for collecting, analyzing, and using information to answer basic questions about your violence prevention program.

WHAT CAN AN EVALUATION TELL YOU?

An evaluation can tell you whether you have been successful in meeting two primary types of program objectives:

Have you been successful in implementing your program? Are you implementing the activities that you initially planned? Are you reaching the intended target population? Are you reaching the intended number of residents? Are you developing planned collaborative relationships? These questions address your implementation objectives -- what you plan to do, how you plan to do it, and who it is you want to reach.

Have you achieved the results/outcomes you expected? Are you seeing a reduction in crime in your complex? Do residents feel safer? Are residents exhibiting the expected changes in knowledge, attitudes, behaviors, awareness, or self-esteem? Are you seeing the expected changes in the community? These questions address your outcome objectives, which are your expectations about how your program will affect the incidence of acts of violence in your housing development and affect residents' satisfaction, knowledge, behavior, and attitudes.

An evaluation should address both types of objectives. You may have successfully implemented your program, but if you do not have information about program outcomes, you will not know if your program is resulting in decreased crime and violence. Similarly, the program may be successful in increasing security, but you will not be able to identify how or why these changes occurred if you do not have information about how the program was implemented.

To answer these questions, the evaluation plan must include performance measures. Performance measures may be based on quantitative or qualitative data. Quantitative data are easily expressed in numerical terms, such as the number of robberies per month or the number of residents participating in a specific violence prevention activity. On the other hand, qualitative data usually describe attitudes, such as residents' feelings of safety in their homes and their beliefs about how serious the problem of teenage drug use is. Both types of performance measures are useful in evaluating your violence prevention program.

WHAT IS INVOLVED IN CONDUCTING AN EVALUATION?
The term "systematic" in the definition of evaluation indicates that evaluation requires a structured and consistent method of collecting and analyzing information about your program. This means that an evaluation must be carefully planned and carried out. You can ensure that your evaluation is conducted in a systematic manner by following several basic steps.

Step 1: Assemble an evaluation team. Planning and implementing your evaluation should be a team effort. Even if you hire an outside consultant to help, you and your staff need to remain full partners in the evaluation effort. Chapter 3 discusses various evaluation teams and the roles that different team members may play in an evaluation.

Step 2: Prepare for the evaluation. This includes deciding what to evaluate, building a model of your program, and stating your objectives in measurable terms. The more attention you give to planning the evaluation, the more effective it will be. Chapter 4 will help you prepare for your evaluation.

Step 3: Develop an evaluation plan. An evaluation plan is a blueprint or a map of an evaluation. It details the design and methods that will be used to conduct the evaluation and analyze the findings. Information on what to include in a plan is provided in chapter 5.

Step 4: Collect evaluation information. Once an evaluation plan is completed, you are ready to begin collecting information. This will require selecting and/or developing information collection procedures and instruments. This process is discussed in chapter 6.

Step 5: Analyze your evaluation information. After evaluation information is collected, it must be organized in a way that allows you to analyze it. Analysis should be conducted at various times during the course of the evaluation to allow you and your staff to obtain ongoing feedback about the program. This will either validate what you are doing or identify areas where changes may be needed. Chapter 7 discusses the analysis process.

Step 6: Prepare the evaluation report. The evaluation report should be a comprehensive document that describes the program in addition to providing the results of the implementation and outcome analysis. The report also should include an interpretation of the results for understanding program effectiveness. Chapter 8 is designed to assist you in preparing an evaluation report.

WHAT WILL AN EVALUATION COST?

Although a dollar amount cannot be specified here, it is possible to describe the kinds of information you can obtain from evaluations at different cost levels. Generally, the more data you collect (that is, collecting data at different points in time or from a variety of sources), the more costly the evaluation; however, the usefulness of the evaluation also increases.

Collecting data from one point in time and from a limited number of sources, or using data that are already assembled for other purposes, will minimize your evaluation costs. Examples of performance measures that can be addressed at this level of evaluation include the following:

o The number and/or characteristics of residents participating in a violence prevention activity.
o Current resident satisfaction with security or a particular violence prevention activity.
o The number of violent acts occurring in the housing development at a specific point in time.

While the above-listed performance measures are useful, they do not provide the information needed to assess the change that occurred because of your program.
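Before turning to designs that measure change, here is a minimal sketch of the kind of one-time tabulation described above. The survey records, field names, and values are entirely hypothetical; a real tabulation would draw on your own intake forms, incident logs, or resident surveys.

```python
# Hypothetical one-time resident survey records (all values invented).
records = [
    {"age_group": "16-20", "participated": True,  "satisfied_with_security": True},
    {"age_group": "21-40", "participated": True,  "satisfied_with_security": False},
    {"age_group": "21-40", "participated": False, "satisfied_with_security": True},
    {"age_group": "41+",   "participated": True,  "satisfied_with_security": True},
]

participants = sum(r["participated"] for r in records)
satisfied = sum(r["satisfied_with_security"] for r in records)

# Count participants by age group.
by_age = {}
for r in records:
    if r["participated"]:
        by_age[r["age_group"]] = by_age.get(r["age_group"], 0) + 1

print(f"Residents surveyed: {len(records)}")
print(f"Residents participating in the activity: {participants}")
print(f"Participants by age group: {by_age}")
print(f"Satisfied with security: {satisfied / len(records):.0%}")
```

Counts and percentages like these describe the program at a single point in time; the higher levels of evaluation described next build on them.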
To measure change, you need data from at least two points in time, usually immediately before the program activity was initiated and again when it is reasonable to assume that the program would have had an impact (perhaps 2 to 6 months after initiating the activity). Examples of performance measures at this level of evaluation include the following:

o The increase (or decrease) in the number of residents or types of residents served by a program activity.
o The increase (or decrease) in residents' satisfaction with security or a particular violence prevention activity.
o The decrease (or increase) in the number of violent acts occurring in the housing development.

Comparing data from two points in time allows you to assess changes that may have occurred because of your violence prevention program. Obviously, collecting and analyzing data from two or more points in time is more costly than a one-time data collection. However, if resources permit, this type of evaluation will provide significantly more information than the one-time data collection approach.

The most sophisticated level of evaluation -- and the most costly one -- allows you to determine that an observed change was caused by your program and not by some outside circumstances. It is the most costly because it requires that the evaluation include data from sources not affected by your program, such as residents from the surrounding neighborhood. For example, if your residents express increased satisfaction with security at their housing development, and residents of the surrounding neighborhood also express increased satisfaction with their security, then the increased satisfaction might be due to an increase in police activity or other community initiatives not directly related to your violence prevention program. Alternatively, if the neighborhood residents express either no change in satisfaction or a decrease in their satisfaction with security, then you have fairly strong evidence that your program is responsible for your residents' increased satisfaction.
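The logic of this comparison can be reduced to a simple calculation: compare the change inside the development with the change in the comparison area over the same period. The sketch below is a hypothetical illustration only -- the satisfaction figures are invented -- and a real evaluation would also need to weigh sample sizes, survey methods, and statistical significance.

```python
# Hypothetical shares of residents reporting satisfaction with security,
# measured before and after the program (all figures invented).
development  = {"before": 0.40, "after": 0.62}   # residents of the housing development
neighborhood = {"before": 0.42, "after": 0.45}   # residents of the surrounding neighborhood

change_development  = development["after"] - development["before"]
change_neighborhood = neighborhood["after"] - neighborhood["before"]

# If satisfaction rises far more inside the development than in the surrounding
# neighborhood, that is stronger evidence that the program, rather than citywide
# factors, produced the change.
print(f"Change in development:  {change_development:+.0%} points")
print(f"Change in neighborhood: {change_neighborhood:+.0%} points")
print(f"Difference in changes:  {change_development - change_neighborhood:+.0%} points")
```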
As you can see, there is no single answer to the question, "What can an evaluation tell you?" posed earlier in this chapter. Evaluations can include a variety of performance measures and can provide different levels of information. The remaining chapters in this manual provide guidance in designing an evaluation that will meet both your information needs and budgetary constraints.

------------------------------------------------------

CHAPTER 3 -- WHO SHOULD CONDUCT YOUR EVALUATION?

Evaluation is best thought of as a team effort. Although one person heads an evaluation team and has primary responsibility for the project, this individual will need assistance from others on your staff. An evaluation team will work together on the following tasks:

o Determining the focus and design of the evaluation.
o Developing the evaluation plan, performance indicators, and data collection instruments.
o Collecting, analyzing, and interpreting data.
o Preparing the report on evaluation findings.

TYPES OF EVALUATION TEAMS

You can assemble many types of evaluation teams. Three possible options include:

o Hiring an outside evaluator (option 1).
o Using an inhouse evaluation team supported by an outside consultant and program staff (option 2).
o Using an inhouse evaluation team supported by program staff (option 3).

Hiring an outside evaluator. Housing authorities typically do not have a research and evaluation staff and will probably need to hire an outside evaluator. This person would be supported by inhouse staff and would serve as a team leader. The evaluator could come from a research institute or a consulting firm. For more information on locating a candidate for this position, see the section later in this chapter on finding and hiring an outside evaluator.

Using an inhouse evaluation team supported by an outside consultant and program staff. If you feel that you have sufficient staff resources to implement the evaluation but need assistance with the technical aspects, you may want to hire an outside consultant. In this situation, an inhouse evaluator would serve as the team leader and be supported by both program staff and the outside consultant. If there are research resources within your Public Housing Agency (PHA), you may want to consider this option. A consultant could support the evaluation by developing the evaluation design, conducting data analyses, and selecting or developing questionnaires. This person can also help you develop the evaluation plan and performance indicators.

Using an inhouse evaluation team supported by program staff. If resources are available within your PHA (that is, if you have research staff, evaluators, or program personnel who can assist with the evaluation), you could recruit these individuals to serve as evaluation team members.

The information below shows the possible advantages and disadvantages of each of the evaluation team options. Whatever team option you select, you will want to make sure that you or someone from the PHA becomes part of the team. Even if your role is limited to one of overall evaluation management, you will want to participate in all phases of the evaluation effort.

------------------------------------------------------

POSSIBLE ADVANTAGES AND DISADVANTAGES OF EVALUATION TEAM OPTIONS

Option 1: Outside evaluator
o Advantages: Results may be perceived by current or potential funders as more objective because the evaluator does not have a stake in the evaluation findings; may have greater expertise and knowledge than agency staff about the technical aspects involved in conducting an evaluation.
o Disadvantages: Can be expensive to hire; may not have an adequate understanding of issues relevant to public housing or to residents of public housing.

Option 2: Inhouse evaluation team supported by outside consultant and program staff
o Advantages: May be less expensive; using agency staff as team members increases the likelihood that the evaluation will be consistent with program objectives.
o Disadvantages: Greater time commitment required of staff may outweigh the cost reduction of using the outside professional as a consultant instead of a team leader; may produce a less influential/objective report.

Option 3: Inhouse evaluation team supported by program staff
o Advantages: May be the least expensive option; promotes maximum involvement and participation of PHA staff and can contribute to building staff expertise for future evaluation efforts.
o Disadvantages: May not be sufficiently knowledgeable or experienced to effectively design and implement the evaluation; potential funders may not perceive evaluation results as objective.

------------------------------------------------------

Deciding what team is best for you. This decision will be influenced most by the resources and capabilities of your PHA.
To determine what internal resources are available, you can examine your staff's skills and experience in planning an evaluation, designing data collection procedures and questionnaires, and collecting and analyzing information. Below is a checklist to help you decide what type of team you may need.

If you answer "no" to all of the resource questions, you may want to consider postponing your evaluation until you can obtain funds to hire an outside evaluator. You may also want to consider budgeting funds for an evaluation in your future program planning efforts. If your answer to question 1 is "yes" but your answer is "no" to all other questions, you will need maximum assistance in conducting your evaluation, and option 1 (an outside evaluator with inhouse support) is probably your best choice. If you answer "no" to question 1 but "yes" to most of the other resource questions, then option 3 (inhouse staff only) may be an appropriate choice for you. However, if you plan to use evaluation findings to seek program funding, you may want to consider using option 2 (inhouse team with outside consultant) and try to obtain evaluation funds from other areas of your agency's budget. If your answer to question 1 is "yes" and the remainder of your answers are mixed (some "yes" and some "no"), then either option 1 or option 2 could be effective.

------------------------------------------------------

RESOURCES FOR APPROPRIATE TEAM SELECTION

1. Does your PHA have funds designated for evaluation purposes?
2. Have you successfully conducted previous evaluations of similar programs, components, or services?
3. Are there existing measures or indicators of performance currently in place?
4. Are existing program practices and information collection forms useful for evaluation purposes?
5. Can you collect evaluation information as part of your regular intake of residents?
6. Are there PHA staff who have training and experience in evaluation-related tasks?

------------------------------------------------------

FINDING AND HIRING AN OUTSIDE EVALUATOR

Careful selection of an outside evaluator can mean the difference between a positive and a negative experience. A good place to start is to consider using someone that you or another PHA has worked with successfully on another project. You may want to begin the search by interviewing that particular person. Other public agencies within your community may also be good sources for this type of referral.

PHAs located in or near a large city should have little trouble finding an evaluator. Many consulting firms have staff experienced in evaluation who have conducted housing-related evaluation studies. Most universities and colleges have faculty who can design and conduct evaluations. This manual includes a listing of national resources that may aid you in your search for experts.

Before hiring an evaluator, you will want to know the following:

o What level of experience does the evaluator have in the area of program evaluation?
o Does the evaluator have any experience conducting evaluations in the area of public housing or in a related area?
o Can the evaluator offer assistance in the full range of program evaluation activities including research design, data collection, data analysis, data interpretation, and dissemination of the results?
o Will the evaluator present information in a way that will be useful to you?
o How much will the project cost?
o How will the housing staff be involved in the evaluation?
o Is the evaluator willing to work closely with you, the housing agency staff, and the tenant association? A good outside evaluator will not dictate to you or your staff how the project will proceed but will instead work with you to conduct a successful evaluation. You should have input into determining the purpose of the evaluation, research questions, and performance indicators. Everyone on your evaluation team, including the outside evaluator, should be willing to collaborate with your PHA. There are four basic steps for hiring an evaluator. Step 1: Develop a statement of work. The first step in the hiring process is to develop a statement of work that details the general and specific requirements for the evaluator. General requirements should list the materials, services, and products to be provided by the evaluator. In creating the statement of work, you will need to know the types of evaluation activities you want the evaluator to perform. Evaluator responsibilities can involve developing an evaluation plan, providing progress reports, developing data collection instruments and forms, collecting data, analyzing data, and writing reports. The general requirements of the statement of work should list the frequency with which you expect to meet with the evaluator and the requirements for submitting written reports. Specific requirements should outline tasks. For example, if the tasks to be completed by the evaluator include a preliminary meeting; development of a research design, data collection plan, and sampling plan; data collection activities; data analysis; and a preliminary and final report, then the statement of work would specify each of these activities as a separate task. In addition, each task should have a timeframe for beginning and completion. Similarly, each task should have specific milestone dates for completion of work or submission of documents. Step 2: Locate sources for evaluators. Potential sources you can use to find an evaluator include: o Other public housing authorities that have used outside evaluators. PHAs or community-based organizations may have staff who have used outside evaluators and are able to recommend one, suggest methods of advertising for one, or provide other useful information. Contacting similar programs is one of the best ways to find an evaluator who understands your program, is sensitive to public housing residents, and can provide a useful evaluation that meets your needs. o Evaluation divisions of State or local agencies. Most State or local government agencies have planning and evaluation departments. In addition, some local police departments employ research staff. You may be able to use individuals from these agencies to work with you on your evaluation, or they may be able to direct you toward other organizations with experience in conducting outside evaluations. o Department of Housing and Urban Development (HUD). A number of offices within HUD fund national evaluation studies conducted by contractors. These offices, such as the Office of Policy Development and Research and the Office of Public and Indian Housing, may be able to direct you to contractors who are experienced in conducting research and evaluation studies. o Local colleges and universities. Departments of sociology, psychology, social work/social welfare, and public administration, as well as university-based research centers, are all possible sources within colleges and universities. Well-known researchers affiliated with these institutions may be readily identifiable. 
If they cannot personally assist you, they may be able to refer you to other individuals who are interested in performing local program evaluations. o Technical assistance providers. Some Federal grant programs include a national technical assistance provider. If your agency is participating in this kind of grant program, then asking for assistance in identifying and selecting an evaluator is appropriate and reasonable. o The public library. Reference librarians may be able to direct you to new sources. They can help identify local research firms and may be able to provide you with conference proceedings that list program evaluators. o Research and consulting firms. Many experienced evaluators are part of research and consulting firms. They may be more difficult to identify initially but are sometimes listed in the yellow pages under "Research" or "Marketing Research." They can also be located by contacting your State human services departments to get a listing of the firms that have bid on recent contracts for evaluations of State programs. o The United Way, American Public Welfare Association, Child Welfare League of America, Urban League, and local foundations. Your United Way or Urban League chapter, or area foundation staff and board members, may be able to provide you with names of local evaluators. They may also be able to provide insight on evaluations that were done well. o The American Evaluation Association. Many evaluators belong to the American Evaluation Association. Based at the University of Virginia in Charlottesville, Virginia, this organization is able to provide a list of members in your area for a fee. Additional information on the American Evaluation Association is contained in the resources section of this manual. o The American Criminal Justice Association (ACJA). ACJA has over 4,000 members who are concerned with the administration of criminal justice. ACJA publishes a semiannual journal, the Journal of the American Criminal Justice Association. o The American Society of Criminology (ASC). ASC is a national organization concerned with criminology, research, and education involving the etiology, prevention, control, and treatment of crime and delinquency. Its membership includes over 2,500 professional and academic criminologists, practitioners, and academicians in many fields of criminal justice. ASC publishes a newsletter, The Criminologist, six times a year and a journal, Criminology, four times a year. o The American Sociological Association. State and local chapters of the American Sociological Association may be able to direct you to members who conduct local program evaluations. Departments of sociology or criminology at local colleges and universities should be able to provide you with your State chapter's telephone number. Step 3: Advertise and request applications. Once you have developed a statement of work and identified possible sources for evaluators, you are ready to advertise for applications. Advertising in the local paper, posting the position at a local college or university, and working with your local government's human resources department are possible ways of soliciting offers. Agency newsletters, local and national meetings, and professional journals are additional sources where you can post your request for applications or an advertisement. You will want to advertise as widely as possible, particularly if you are in a small community or are undertaking an evaluation for the first time. 
Using several advertising sources will help ensure that you receive more than one response. The box below lists some suggestions for preparing an effective advertisement.

If you have sufficient time, you may want to consider a two-step process for applications -- advertising the position but also sending evaluators who respond to your advertisement more detailed information about your evaluation requirements. For example, you could send potential evaluators a brief description of the program and the evaluation questions you want to answer, along with a description of the size and location of the PHA where the evaluation will be conducted. This would give the candidate an opportunity to propose a plan that more closely corresponds to your program needs.

------------------------------------------------------

ELEMENTS OF AN EFFECTIVE ADVERTISEMENT

Information to include in the advertisement:

o Your agency's name, address and phone number, and a contact person (this is optional; however, if you include one, this person should be prepared to handle inquiries).
o Brief description of program to be evaluated, including program objectives, types of evaluation anticipated, available budget, and the period of performance for both the program and the evaluation.
o Principal tasks of the evaluator.
o Requested evidence of expertise (such as letters of introduction, a resume, a list of references, or a description of an evaluation recently completed or in progress).
o Whether an interview is required (this is strongly recommended for select candidates).
o Deadline for response.
o Other requirements, such as whether you will accept a faxed application (and if you are a public agency, any other restrictions related to procurement).

------------------------------------------------------

Step 4: Review proposals and select an evaluator. The final step in hiring an evaluator is to review the proposals submitted and select an evaluator. In reviewing proposals, you will want to consider the candidate's writing style, type of evaluation plan proposed, experience working with your type of program and staff, familiarity with violence prevention initiatives and public housing residents, experience conducting similar evaluations, and proposed cost.

Once you have narrowed your selection to two or three candidates, you will want to contact them to schedule inperson interviews. The interview is an important part of the selection process. It will allow you to determine whether you and the evaluator are compatible. You can review the criteria for selecting an evaluator discussed earlier in this chapter and use them to guide your questions during the interview.

------------------------------------------------------

A GOOD EVALUATOR:

o Is willing to work collaboratively to develop an evaluation plan that meets your needs.
o Is able to communicate in simple, practical terms.
o Has experience evaluating similar programs and working with similar levels of resources.
o Has experience with statistical methods.
o Considers cultural differences.
o Has the time available to do the evaluation.
o Has experience developing data collection forms or using standardized instruments.
o Is willing to work with a national evaluation team (if there is one).
o Will treat data confidentially.

------------------------------------------------------

In most cases the ideal candidate will seem obvious after the interview is conducted, but sometimes it will be more difficult to decide.
As you do for other job applicants, you will need to check references. It makes sense to follow whatever procedure your PHA has in place for contracting out for services. Remember to obtain the appropriate agency approval before notifying your chosen evaluator. If you currently operate a federally funded violence prevention program, check your grant requirements to determine whether you need Federal approval of your evaluator.

What to do when you have trouble hiring an evaluator. Despite your best intentions, you may encounter difficulties, including one or more of the following:

Few or no responses to your advertisement. PHAs located in isolated areas may only get a few responses to their advertisements. If this occurs, you may want to contact one of the 80 local offices of the Department of Housing and Urban Development. They should be able to assist you by providing information about evaluators who have conducted similar evaluation studies in or near your community.

None of the applicants is compatible with program philosophy and staff. If applicants do not match program needs, you may find it helpful to network with other PHAs or directors of violence prevention programs in your city or State who have worked with evaluators. A compatible philosophy and approach are most important, and tradeoffs between them and the proximity of the evaluator may be needed to find the right one.

Money allocated for the evaluation is insufficient to hire a third-party evaluator. In this instance, you will need to generate additional funds for the evaluation or negotiate with your evaluator to donate his or her services (an in-kind service). Many evaluators are committed to their profession and regularly discount their fees or donate a portion of their time to evaluation projects. Another option is to negotiate with a college professor, using the evaluation dollars to pay for this person's time, while advanced degree students work under the direction of the professor to conduct some of the evaluation activities.

There may be researchers in your community who have an interest in conducting research in the area of violence prevention. They may be interested in using information about your violence prevention initiative and may be willing to provide evaluation services in exchange for access to participant and program information. For example, you can allow a university professor to have access to program and participant records in exchange for evaluation services such as instrument development or data analysis.

MANAGING AN EVALUATION HEADED BY AN OUTSIDE EVALUATOR

Often when the decision is made to hire an outside evaluator, program managers and staff believe that the evaluation is out of their hands. This is not true. An outside evaluator cannot do the job effectively without the cooperation and assistance of PHA staff and residents. An evaluation is like any activity taking place within your agency; it needs to be managed.

Creating a contract. One mechanism for effectively managing the evaluation is to prepare a written contract specifying the evaluator's roles and responsibilities. The contract should be prepared once you have received the appropriate approval to hire someone. Both the evaluator and the PHA staff authorized to hire outside services will need to sign the contract. Your contract is a legally binding document that specifies the evaluation activities to be performed, the amount of time to complete the evaluation, and the cost.
This document offers you protection by specifying who is expected to conduct the work and how the data that has been collected will be used. Every evaluation contract should include the following items: o Who will perform evaluation tasks. Some evaluators delegate many of their responsibilities to less experienced staff and have little contact with the client once the contract is signed. To protect yourself from this scenario, the contract should specify what percentage of time the evaluator and his or her staff will devote to the evaluation. o Who owns the evaluation data. In your contract, specify who has ownership of the data and to whom the information can be given. Release of information to outside parties should always be cleared with appropriate PHA staff. Any plans for publishing the evaluation results should be discussed and cleared before articles are written and submitted for publication. o Your expectations about contacts between the evaluator and PHA. It is very important for an outside evaluator to keep the PHA, program staff, and residents informed about the status of the evaluation. Regular communication allows the PHA and other concerned parties to make important changes on an ongoing basis. The contract should specify expectations about the frequency of meetings and ongoing reporting requirements. What to do if problems arise. Even with the best contracts, however, problems can arise during the course of the evaluation process. Examples of types of problems that may occur and possible solutions include the following: Evaluation approaches differ -- the program manager and evaluator do not see eye to eye. Try to reach a common ground where both programmatic and evaluation constraints and needs are met. If many reasonable attempts to resolve differences are made and severe conflicts still remain that could jeopardize the program or the evaluation, program staff should consider terminating the evaluation contract. This decision will need to be weighed carefully, as a new evaluator must be recruited and brought up to speed in midstream. In some situations, however, this may be the best option. Evaluation of the program requires skills or analyses for which you did not originally plan. You may find that your evaluator is in agreement with your assessment and is willing to add another person to the evaluation team who has expertise and skills needed to undertake additional or different analyses. Many times additional expertise can be added to the evaluation team by using a few hours of a consultant's time. For example, programmers or statisticians may be necessary to augment the evaluation team. The evaluator leaves, terminates the contract, or does not meet contractual requirements. If the evaluator leaves the area or terminates the contract, you will most likely be faced with recruiting a new one. In some instances programs have successfully maintained their ties to evaluators who have left the area, but this is often difficult. When your evaluator does not meet contractual requirements and efforts to resolve the dispute have failed, you should turn the case over to your procurement office. The evaluator does not have any experience working with low-income populations or community residents. It is not always possible to locate an evaluator who has the necessary experience in evaluation and experience working with public housing residents. It is a documented fact that very few evaluations have been conducted of violence prevention initiatives in public housing. 
You may have to educate the evaluator about the public housing development where your project is being offered and the characteristics of the surrounding community. The evaluator needs to understand how these factors may affect the evaluation and the questionnaires and procedures to be used. You may require that the evaluator work with members of the resident council and other key community leaders so that the evaluation is relevant to the participants' experiences and cultures. Remember, the outside evaluator works for you, and the implementation of a successful and valuable evaluation depends on you.

------------------------------------------------------

CHAPTER 4 -- HOW DO YOU PREPARE FOR AN EVALUATION?

There are three basic steps to building a strong foundation for your evaluation.

o Step 1: Decide what to evaluate.
o Step 2: Specify your violence prevention program activities and the assumptions used in developing those activities.
o Step 3: State your program objectives in measurable terms.

STEP 1: DECIDE WHAT TO EVALUATE

Your violence prevention program may be broad in scope, encompassing various activities, or it may focus on only one or two activities. You may choose to evaluate any or all parts of your program. Obviously, the more activities you include in your evaluation, the more resources will be required. If your resources are limited, you may want to narrow the scope of your evaluation. It is better to conduct an effective evaluation of a single program activity than to attempt a more comprehensive evaluation of many activities if you are lacking sufficient resources.

STEP 2: SPECIFY YOUR VIOLENCE PREVENTION PROGRAM ACTIVITIES AND THE ASSUMPTIONS USED IN DEVELOPING THOSE ACTIVITIES

Whether you decide to evaluate your entire program or a single program activity, you will need to develop a clear picture of each activity to be evaluated. Evaluation researchers usually refer to this picture as a "model." The model of your program should include the components listed below:

o The assumptions on which the activity was developed.
o A description of the program activities.
o Short-term program outcomes.
o Intermediate outcomes.
o The overall goal of the program activity.

The assumptions on which the activity was developed. Your decision to implement a particular violence prevention activity is based on a problem you identified and your belief that the activity will decrease or eliminate that problem. Listing these assumptions helps in understanding the objectives of the program activity. The examples below are some assumptions you might have that would lead to developing specific activities.

------------------------------------------------------

EXAMPLES OF ASSUMPTIONS AND POTENTIAL RESPONSES

Assumption: A shortage of security measures contributes to violence in public housing.
Potential Response (Violence Prevention Activity): Volunteer resident security patrols are organized to work with local police.

Assumption: Youth drug use will decrease if alternative social activities are available.
Potential Response (Violence Prevention Activity): A 6-week summer program with jobs and recreation is developed.

Assumption: Youth residents need to be educated about how to respond to conflict.
Potential Response (Violence Prevention Activity): Role models are used to teach conflict resolution workshops.

------------------------------------------------------

A description of the program activities.
Listing the specific activities you plan to carry out (or have already implemented) identifies what it is you will be evaluating. This listing should include a description of the specific activity, who will perform the activity, who the activity is planned to reach, how many people the activity is designed to reach, and the timetable of the activity. Worksheet 4-1(a) shows what might be included in a violence prevention activity designed to develop resident security patrols. A blank worksheet, Worksheet 4-1(b), is provided for your use in planning a similar violence prevention activity. Completing worksheets such as this helps in specifying your implementation objectives and outcome objectives.

o Step A: Enter each activity required to carry out the overall activity.
o Step B-1: Enter PHA staff or residents involved in implementing the activity.
o Step B-2: Enter any outside resources needed to implement the activity.
o Step C: Enter the people expected to be affected by the activity.
o Step D: Enter the time period in which the activity will be started and the duration of the activity.

Short-term program outcomes. These are the results you expect to see in the near future. They include areas such as keeping to the planned schedule and the number of people receiving the service as well as monitoring the effects of the program on participants.

Intermediate outcomes. These are the long-term results you expect to achieve after the program has been fully operational.

The overall goal of the program activity. This is the expected impact of the program.

------------------------------------------------------

WORKSHEET 4-1(A): LISTING OF PROGRAM COMPONENTS

Activity and Expected Outcome: Develop Volunteer Resident Security Patrols to Decrease Violent Acts in the Housing Development

1. A. What you will do: Meet with local police to plan program and training
   B. Who will do it:
      1. PHA -- Project administrator; Head of security
      2. Others -- Local Police
   C. Whom you will reach/how many: Not applicable
   D. Duration/Timeline of activity: 1 month

2. A. What you will do: Meet with resident council
   B. Who will do it:
      1. PHA -- Project administrator; Resident council
      2. Others --
   C. Whom you will reach/how many:
   D. Duration/Timeline of activity: 1 month (concurrent with number 1)

3. A. What you will do: Announce plan to residents
   B. Who will do it:
      1. PHA -- Project administrator; Resident council
      2. Others --
   C. Whom you will reach/how many:
   D. Duration/Timeline of activity: Beginning second month

4. A. What you will do: Enlist resident volunteers
   B. Who will do it:
      1. PHA -- Project administrator; Resident council
      2. Others --
   C. Whom you will reach/how many:
   D. Duration/Timeline of activity: Complete by end of third month

5. A. What you will do: Train resident volunteers
   B. Who will do it:
      1. PHA --
      2. Others -- Local police
   C. Whom you will reach/how many: Nine three-person teams; nine "backups"
   D. Duration/Timeline of activity: 1 week; complete by mid-fourth month

6. A. What you will do: Operate tenant security patrols between 7 p.m. and 11 p.m. every day
   B. Who will do it:
      1. PHA -- Residents
      2. Others -- Local police during first week
   C. Whom you will reach/how many: Potentially affects all project residents
   D. Duration/Timeline of activity: Ongoing

7. A. What you will do: Replace resident volunteers as needed
   B. Who will do it:
      1. PHA -- Project administrator; Resident council
      2. Others -- Local police as needed
   C. Whom you will reach/how many: Selected project residents
   D. Duration/Timeline of activity: Ongoing
Duration/Timeline of activity: Ongoing ------------------------------------------------------ WORKSHEET 4-1(B): LISTING OF PROGRAM COMPONENTS A: Enter each step required to carry out the overall activity. B-1: Enter PHA staff or residents involved in implementing the activity. B-2: Enter any outside resources needed to implement the activity. C: Enter the people expected to be affected by the activity. D: Enter the time period in which the activity will be started and the duration of the activity. Activity and Expected Outcome: A. What you will do B. Who will do it B-1. PHA B-2. Others C. Whom you will reach/how many D. Duration/Timeline of activity ------------------------------------------------------ STEP 3: STATE YOUR PROGRAM OBJECTIVES IN MEASURABLE TERMS Having clearly defined the steps needed to implement the violence prevention activity and identify the short-term and intermediate outcomes, you now need to develop specific criteria -- or performance measures -- that you will use to determine that the program objectives were achieved. Program objectives include program implementation measures, such as how you will implement your violence prevention activity, as well as the expected result of the activity. Using our example of tenant security patrols, the outcome of "decreasing violent acts in the housing development" needs to be made more specific. For example, how much of a decrease are you expecting in assaults? In rapes? In murders? The specific goal you define should be realistic, but it should also be indicative of effectiveness. For example, it may be unrealistic to expect that no assaults will occur, but a decrease of only one percent is not an indication of program effectiveness. You can see that such performance measures are subjective. Knowledge of your housing development, the residents, and other community factors will influence how you define the performance measures. Performance measures of short-term and intermediate goals may differ for the same goal. Again, using the example of tenant security patrols, your short-term goal of decreases in assaults may be 10 percent, but the intermediate goal may be 50 percent based on the assumption that as knowledge of the presence of tenant patrols becomes more widespread, the number of assaults will decrease further. Worksheet 4-2(a) provides examples of short-term and intermediate performance measures; Worksheet 4-2(b) is a blank worksheet provided for your use. Defining the outcome objective in measurable terms identifies the information you will need to evaluate your program activity. Note, for example, that whenever you want to measure change, you will need information at two points in time. Chapter 6 discusses collecting information for the evaluation. ------------------------------------------------------ WORKSHEET 4-2(A): DEFINING PERFORMANCE MEASURES Activity: Develop Volunteer Resident Security Patrols to Decrease Violence Overall Objective: Decrease violent acts occurring on housing development property Performance Measures for Overall Objective o Decrease assaults and rapes by 10% per month after 2 months' operation o Decrease robberies by 10% per month after 2 months' operation o Decrease murders by 10% per month after 2 months' operation 1. Recruit resident volunteers Short-Term Performance Measure: o Within 2 months of recruitment, have 20 resident volunteers Intermediate Performance Measure: o Within 9 months of recruitment, have 30 resident volunteers 2. 
Train resident volunteers Short-Term Performance Measure: o Complete training of first 12 volunteers within 2 weeks of recruitment Intermediate Performance Measure: o Conduct training of replacement volunteers on an ongoing basis 3. Enlist cooperation of local police Short-Term Performance Measures: o Planning: Local police representative attends planning sessions o Training: Local police participate in developing and conducting training Intermediate Performance Measure: o Meet at least monthly with local police representative to review progress and/or problems with security patrols 4. Enlist cooperation of resident council Short-Term Performance Measures: o Planning: Council representative attends planning sessions o Recruitment: Resident council actively participates in recruitment (specific activities to be identified in consultation with resident council) Intermediate Performance Measure: o Meet monthly with resident council to receive feedback on effectiveness of tenant security patrols 5. Establish security patrols Short-Term Performance Measures: o Have four three-person security teams operational within 3 months of initiating recruitment Intermediate Performance Measure: o Have seven three-person security teams operational within 9 months of initiating recruitment ------------------------------------------------------ WORKSHEET 4-2(B): DEFINING PERFORMANCE MEASURES Activity: Overall Objective: Performance Measures for Overall Objective Activities Short-Term and Intermediate Performance Measures Short-Term Measures Intermediate Measures ------------------------------------------------------ CHAPTER 5 -- DEVELOPING AN EVALUATION PLAN An evaluation plan is a written document that states the objectives of the evaluation, the questions that will be answered, the information that will be collected to answer these questions, and when collection of information will begin and end. You can think of the evaluation plan as the instructions for the evaluation. This plan can be used to guide you through each step of the evaluation process because it details the practices and procedures for successfully conducting your evaluation. Once the evaluation plan has been completed, it is a good idea to have it reviewed by selected individuals for their comments and suggestions. Potential reviewers include: o Public Housing Agency (PHA) administrators who can determine whether the evaluation plan is consistent with the agency's resources and evaluation objectives. o Housing staff who can provide feedback on whether the evaluation will create an excessive burden for them and whether it is appropriate for residents. o Professional evaluators. This chapter describes the components for an evaluation plan and provides an outline for preparing a plan. Although you may never need to develop one without assistance, it is helpful for you to know what a plan is and how it is being used by the evaluator you select. The information contained in this chapter will help you: o Work with an experienced evaluator (either an outside evaluator or someone within your PHA) to develop a plan. o Obtain a basic understanding of what should be included in an evaluation plan to assist you as you review one. A sample evaluation plan outline that may be used as a guide appears on the following pages. 
The major sections of the outline are: o Section I: A description of the evaluation framework which specifies what you want to evaluate, what questions are to be addressed in the evaluation, and the timeframe for conducting the evaluation. o Section II: A description of the program implementation objectives. o Section III: A description of the program outcome objectives and performance measures. o Section IV: Procedures for managing and monitoring the evaluation. ------------------------------------------------------ SAMPLE EVALUATION PLAN OUTLINE I. Evaluation Framework A. What you are going to evaluate. 1. The initial program model (assumptions about target population, interventions, short-term outcomes, intermediate outcomes, and final outcomes). 2. Implementation objectives (stated in general and then measurable terms). a. What you plan to do, when, and how. b. Who will do it. c. Participant population and recruitment strategies. 3. Outcome objectives (stated in general and then measurable terms). 4. Context for the evaluation. B. Questions to be addressed in the evaluation. 1. Are implementation objectives being attained? If not, why not (that is, what barriers or problems have been encountered)? What kinds of procedures facilitated implementation? 2. Are outcome objectives being attained? If not, why not (that is, what barriers or problems have been encountered)? What kinds of procedures facilitated attainment of outcomes? a. Do outcomes vary as a function of program features? Which aspects of the program contributed the most to achieving expected outcomes? b. Do outcomes vary as a function of characteristics of the residents or staff? C. The timeframe for the evaluation. 1. When data collection will begin and end. 2. How and why timeframe was selected. II. Evaluating Implementation Objectives -- Procedures and Methods Question 1: Are Implementation Objectives Being Attained and, If Not, Why Not? A. Objective 1: [State objective in measurable terms. Example: Local police representative will attend all planning and resident training sessions.] What to include: 1. Type of information needed to determine if objective 1 is being attained and to assess barriers and facilitators (that is, performance indicators). Example: Number of planning meetings attended by local police representative. 2. Sources of information. Include in your plans procedures for maintaining confidentiality of the information obtained during the evaluation. 3. How sources of information were selected. 4. Timeframe for collecting information (dates when the data collection is planned to begin and end). 5. Methods for collecting the information (that is, records reviews, interviews, paper and pencil questionnaires, and observations). 6. Methods for analyzing the information to determine whether the objective was attained (that is, tabulation of frequencies and assessment of relationships between or among variables). B. Objective 2: [Repeat the same information as in 1-6 of objective 1 above.] C. Objective 3: [Repeat the same information as in 1-6 of objective 1 above.] III. Evaluating Outcome Objectives -- Procedures and Methods Question 2: Are Outcome Objectives Being Attained and, If Not, Why Not? A. Objective 1: [State outcome objective in measurable terms. Example: Decrease robberies on housing development property by 10 percent after 2 months of active security patrols.] What to include: 1. 
Types of information needed to determine if objective 1 is being attained (that is, what evidence will you use to demonstrate the change?). Example: Number of robberies committed per month before and after initiation of security patrols. 2. Sources of information (that is, housing staff, residents, PHA staff, and housing managers) and sampling plan, if relevant. 3. How sources of information were selected. 4. Timeframe for collecting information (dates when the data collection is planned to begin and end). 5. Methods of collecting that information (for example, questionnaires, observations, surveys, and interviews) and plans for pretesting information collection methods. 6. Methods for analyzing the information to determine whether the objective was attained (that is, tabulation of frequencies and assessment of relationships between or among variables using statistical tests). B. Objective 2: [Repeat the same information as in 1-6 of objective 1 above.] C. Objective 3: [Repeat the same information as in 1-6 of objective 1 above.] IV. Procedures for Managing and Monitoring the Evaluation What to include: 1. Procedures for training staff to collect evaluation-related information. 2. Procedures for conducting quality-control checks of the information collection process. 3. Time lines for collecting, analyzing, and reporting information, including procedures for providing evaluation-related feedback to housing managers and staff. ------------------------------------------------------ SECTION I: THE EVALUATION FRAMEWORK This section of the evaluation plan presents the model for assessing your program activities (see chapter 4), program objectives, evaluation questions, and the timeframe for the evaluation (that is, when you will begin and end collection of evaluation information). Section I should also include a discussion of the context for the evaluation, particularly the aspects of the PHA, program staff, and residents that may affect the evaluation. If an outside evaluator is preparing the plan, the evaluator will need your help to prepare this section to ensure that the evaluation is tailored to the needs of your PHA and the residents. SECTION II: EVALUATING IMPLEMENTATION OBJECTIVES This section should provide detailed descriptions of what you plan to do, how you plan to do it, and who it is you want to reach. This information will be used to answer evaluation questions pertaining to your implementation objectives, such as: Are implementation objectives being attained? If not, why not? What barriers or challenges have been encountered? What has facilitated attainment of these objectives? For each objective, the evaluation plan must describe the following: o Types of information needed. o Sources of information. o Criteria for selecting information sources. o Methods for collecting information, such as questionnaires and procedures. o Timeframe for collecting information. o Methods for analyzing information. Types of information needed. Any information that is collected about your program activities or residents can be considered evaluation data. The types of information needed will be guided by the program objectives you seek to assess. For example, when your objective concerns what you plan to do, you will need to collect information on the types of services, activities, or initiatives that are developed and implemented; who received services; and their duration and intensity. 
If the objective of your PHA is to provide increased security patrols at two sites, you will need to collect the following information:
o Number of resident volunteers.
o Number of hours in which security patrols operate.
When the objective concerns who will participate, you will need to collect information about residents' characteristics, the number of residents, how they were selected/recruited, barriers encountered in the selection/recruitment process, and factors that facilitated selection/recruitment. If the objective is to involve 50 residents in a 6-week crime and drug reduction program, for example, you will want to collect the following information:
o Age, sex, and race of participants.
o Number of participants previously involved in criminal or drug activity.
o Number of residents who are participating.
o Information on how the participants learned about the program.
o Amount of time residents participate in the program.
o Number of residents who successfully complete the program.
Sources of information. This refers to where, or from whom, you will obtain evaluation information. Again, the selection of sources will be guided by the objective you are assessing. For example:
o Information on services can come from program records or from interviews with program staff.
o Information on residents and recruitment strategies can come from program records and interviews with staff and residents.
o Information about barriers and facilitators to implementing the program or program activities can be obtained from interviews with relevant staff.
This section of your plan should also include a discussion of how you will maintain the confidentiality of information you obtain from your sources. In addition, it is wise to develop consent forms for those residents being asked to participate in the evaluation. The consent form should include a description of the evaluation objectives and how the information will be used. More information on maintaining confidentiality and a sample informed consent form appear in chapter 6.
Criteria for selecting information sources. If your initiative has a large number of staff members and/or residents, you can reduce the time and cost of the evaluation by including only a sample of them as sources for evaluation information. Sampling is a systematic way of selecting, from the entire group of program participants, a smaller number of persons who are representative of that group. An experienced evaluator will be able to advise you as to whether or not you should select a sample for your evaluation. There are several methods for sampling your sources.
o You can sample by identifying a specific timeframe for collecting evaluation-related information and including only those residents who participate during that timeframe.
o You can sample by randomly selecting the residents (or staff) to be used in the evaluation. A simple shortcut is to assign case numbers to residents and include only the even-numbered cases, although a true random draw (for example, pulling case numbers by lottery) is preferable.
o You can sample based on specific criteria, such as length of time with the program (for staff) or characteristics of residents, such as age, gender, size of family, and length of time in the complex.
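If your participant list is kept in electronic form, the random draw described above can be done with a few lines of a scripting language. The Python sketch below is only an illustration of the idea; the file name, the one-case-number-per-line layout, and the sample size of 25 are assumptions made for this example, not requirements of this manual.
------------------------------------------------------
EXAMPLE (ILLUSTRATIVE ONLY): DRAWING A RANDOM SAMPLE OF RESIDENT CASE NUMBERS

    # Illustrative sketch only: draw a simple random sample of resident case
    # numbers for evaluation interviews. File name and sample size are hypothetical.
    import random

    def draw_sample(case_file="participant_cases.txt", sample_size=25, seed=1997):
        with open(case_file) as f:
            cases = [line.strip() for line in f if line.strip()]
        random.seed(seed)              # a fixed seed lets the same draw be reproduced later
        if sample_size >= len(cases):
            return sorted(cases)       # fewer cases than the target sample: include everyone
        return sorted(random.sample(cases, sample_size))

    if __name__ == "__main__":
        for case_number in draw_sample():
            print(case_number)
------------------------------------------------------
Recording the seed and the list used for the draw makes the sample reproducible if questions about selection come up later; an experienced evaluator can advise on an appropriate sample size for your development.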
Methods for collecting information. For each implementation objective you are assessing, your evaluation plan must specify what information will be collected, the instruments and procedures that will be used (such as questionnaires or record review forms), and who will collect the information. To the extent possible, the collection of this information should be integrated into ongoing program operations. For example, in training programs, the registration forms for residents and the initial assessments of participating residents can be used to collect evaluation-related information as well as information relevant to conducting the training. There are a number of methods for collecting information, including structured and open-ended interviews, paper and pencil inventories or questionnaires, observations, and systematic reviews of agency records or documents. The methods you select will depend upon the following:
o The evidence you need to establish that your objectives were attained. Performance measures make up this needed evidence. They are the indicators that the program activities reached their intended goals.
o The information sources available to you, such as program records, staff, residents, or direct observation.
o Your available resources. You will need to determine if you have the staff and funds available to collect the needed data.
Chapter 6 provides more information on these sources. The questionnaires or forms that you plan to use to collect evaluation information are usually included as part of your evaluation plan. You will not want to begin an evaluation until you have developed or selected all of the data collection instruments you plan to use. Developing or selecting questionnaires to use for the evaluation may require the assistance of an experienced evaluator.
Timeframe for collecting information. Although you will have already specified a general timeframe for the evaluation, you will need to specify one for collecting data relevant to each implementation objective. Times for data collection will again be guided by the objective being assessed.
Methods for analyzing information. This section of your evaluation plan describes the practices and procedures for use in analyzing the evaluation information. For assessing program implementation, the analyses will be primarily descriptive and may involve tabulating frequencies (of services and resident characteristics) and classifying narrative information into meaningful categories, such as types of barriers encountered, strategies for overcoming barriers, and types of facilitating factors. An experienced evaluator can help your evaluation team design an analysis plan. More information on analyzing program implementation information is provided in chapter 7.
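Tabulations of this kind can be done by hand, on a spreadsheet, or with a short script. The Python sketch below simply counts entries; the service names, barrier categories, and example records are hypothetical and stand in for whatever your own case records and interview notes contain.
------------------------------------------------------
EXAMPLE (ILLUSTRATIVE ONLY): TABULATING IMPLEMENTATION FREQUENCIES

    # Illustrative sketch only: tabulate service frequencies and barrier categories
    # from implementation records. All entries are hypothetical.
    from collections import Counter

    # One entry per service contact, as it might be extracted from case records.
    service_log = ["conflict workshop", "security patrol shift", "conflict workshop",
                   "recreation night", "security patrol shift", "conflict workshop"]

    # Narrative comments from staff interviews, already classified by a reviewer.
    barrier_notes = ["scheduling", "child care", "scheduling", "fear of retaliation",
                     "child care", "scheduling"]

    print("Services delivered:")
    for service, count in Counter(service_log).most_common():
        print(f"  {service}: {count}")

    print("Barriers mentioned:")
    for barrier, count in Counter(barrier_notes).most_common():
        print(f"  {barrier}: {count}")
------------------------------------------------------
The classification step itself, deciding which category a narrative comment belongs to, is a judgment call that should be made consistently, ideally by one reviewer or according to written coding rules.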
SECTION III: EVALUATING OUTCOME OBJECTIVES AND PERFORMANCE MEASURES
The practices and procedures for evaluating whether the outcome objectives of your program have been met are similar to those for evaluating implementation objectives. To evaluate outcome objectives you will probably use both qualitative and quantitative performance measures. The performance measures will enable you to answer the following questions:
o Did residents and/or the community demonstrate changes in knowledge, attitudes, behaviors, or awareness? Performance indicators could include self-reported increases in knowledge, a change in attitude, a reduction in criminal activity or violent behavior, and increased awareness of their own or others' violent behavior.
o Are the changes the result of the program's activities? Are the reported changes in knowledge, attitudes, behavior, or awareness a direct result of your program? Did the changes occur after involvement in the program? Are there other factors that may have influenced the changes?
Two commonly used evaluation designs that can help you to answer these questions are:
o Comparison of conditions before and after a program is established.
o Comparison of conditions before and after a program is established, using a comparison group.
A comparison of conditions before and after the violence prevention program is implemented requires that you collect information at least twice -- once before the program is implemented and then again either sometime after the program has been in effect (when you could expect the program to have had a measurable impact) or after the program has ended. You can collect outcome information as often as you like after the program has been implemented, but you must collect it on residents and/or the community before implementing the program. This information is called baseline information and is essential for demonstrating that a change occurred. If you are implementing an education or training program, this type of design can be effective for evaluating immediate changes in participants' knowledge and attitudes. In these types of programs, you can assess residents' knowledge and attitudes prior to the training and immediately after training with some degree of certainty that any changes observed resulted from your interventions.
A comparison of conditions before and after the violence prevention program is implemented using a comparison group also requires that you collect information at a minimum of two points in time and that you collect information from individuals (or about a housing development or neighborhood) not affected by your violence prevention program. The purpose of a comparison group is to determine whether changes you find in your residents or housing development conditions are attributable to your program and not to some other reason. Comparison data might be obtained from the following:
o Housing development residents not participating in the violence prevention program but who are similar to program participants in most other ways; for example, male teenagers not participating in a midnight basketball program.
o Crime statistics from a nearby housing development that has characteristics similar to your housing development's characteristics, such as type of development building (for example, highrise or garden apartments), number of teenagers, and level of criminal activity before your violence prevention program was implemented.
There are obvious cost considerations when including a comparison group in your evaluation design. You must be able to identify a group of individuals or a housing development or neighborhood that is similar to your residents, development, or neighborhood. You must be able to obtain data from such individuals or about the development or neighborhood. Both of these tasks will require some research and additional data collection activity. Although there are additional costs, a comparison group provides significantly stronger evidence of your program's effectiveness: if your program participants show more favorable results on the performance measures than the comparison group, you can state with greater certainty that your program brought about the observed change and that the change is not due to some other reason.
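The arithmetic behind both designs is a simple percent-change calculation. The Python sketch below is only an illustration of the logic; the monthly assault counts, the 6-month before-and-after windows, and the comparison development are all hypothetical figures invented for this example.
------------------------------------------------------
EXAMPLE (ILLUSTRATIVE ONLY): BEFORE-AND-AFTER COMPARISON, WITH AND WITHOUT A COMPARISON DEVELOPMENT

    # Illustrative sketch only: compare the percent change in reported assaults
    # before and after a program, with and without a comparison development.
    # All counts are hypothetical.

    def percent_change(before, after):
        """Percent change from the before period to the after period."""
        return 100.0 * (after - before) / before

    # Monthly assault counts for the 6 months before and after implementation.
    program_site = {"before": [14, 12, 15, 13, 16, 14], "after": [11, 10, 12, 9, 11, 10]}
    comparison_site = {"before": [13, 15, 14, 12, 14, 13], "after": [13, 14, 12, 13, 14, 12]}

    for name, site in [("Program site", program_site), ("Comparison site", comparison_site)]:
        change = percent_change(sum(site["before"]), sum(site["after"]))
        print(f"{name}: {change:+.1f}% change in assaults")

    # A before-and-after design looks only at the program site's change. Adding the
    # comparison site shows how much change occurred anyway, which strengthens (or
    # weakens) the case that the program caused the difference.
    net = (percent_change(sum(program_site["before"]), sum(program_site["after"]))
           - percent_change(sum(comparison_site["before"]), sum(comparison_site["after"])))
    print(f"Change at program site beyond the comparison site: {net:+.1f} percentage points")
------------------------------------------------------
Differencing against the comparison site in this way is an informal version of what evaluators sometimes call a difference-in-differences comparison; whether a change of this size could have occurred by chance is a separate question, discussed in chapter 7.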
Pretesting information collection instruments. Your evaluation plan will need to include a discussion of your plans for testing your questionnaires before using them for the evaluation. This process is commonly referred to as pretesting. Chapter 6 provides information on pretesting instruments.
Analyzing participant outcome information. Your plan for evaluating outcomes should include a description of how you intend to analyze the data that have been collected. The analyses are intended to answer the questions about whether change occurred and whether changes that occurred can be attributed to your program.
SECTION IV: PROCEDURES FOR MANAGING AND MONITORING THE EVALUATION
This section of the evaluation plan can be used to describe the procedures you intend to use to manage the evaluation. If PHA staff are to be responsible for data collection, you will need to describe how they will be trained and monitored. You may want to develop a data collection manual that describes the processes and procedures for staff to use. This will ensure consistency in information collection and will be useful for staff who are hired after the evaluation begins. Various types of monitoring activities are discussed in chapter 6.
The final section of your evaluation plan should include a discussion of how you will handle any changes in program operations that may occur during the time the evaluation is being conducted. For example, if a particular component is discontinued or added to the program or program activities, you will need to have procedures for documenting when this change occurred, the reasons for the change, and whether particular residents were involved in the program before or after the change. This will help you determine whether the change had any impact on attainment of expected results and/or outcomes. After your evaluation plan is complete and the questionnaires have been pretested, you are ready to begin collecting evaluation information. The following chapter discusses information collection.
------------------------------------------------------
CHAPTER 6 -- HOW DO YOU GET THE INFORMATION YOU NEED FOR YOUR EVALUATION?
After you have completed your evaluation design, you are ready to begin collecting information, the process commonly referred to by evaluators as the data collection phase. This chapter will provide you with steps and suggestions to help you in this process. The information you collect will provide some immediate feedback on whether the program has been effective in reaching its objective(s). Information collection generally consists of six steps. These steps were introduced earlier in this manual as part of the process of developing your evaluation plan.
STEP 1: DETERMINE THE KINDS OF INFORMATION YOU NEED FOR YOUR EVALUATION
Your outcome objectives, if stated in measurable terms, will guide the decision about the kinds of information needed. As discussed in chapter 2, both quantitative and qualitative performance measures will be needed to describe program outcomes. You will need to collect information that can be used to demonstrate that your program or program activities have been effective. For example, if one of your general participant outcome objectives is to integrate children from the housing complex into ongoing leagues or other competitive activities in the surrounding community, the way that objective is stated in measurable terms will determine the types of information needed. Stated in measurable terms, the objective may be: "To increase the number of extracurricular activities that youths participate in and to reduce their incidence of behavioral problems as reported by crime statistics, their parents, and self-reports by the youngsters themselves." Note that any given objective may have multiple measures.
In this example, the types of information you will need to assess attainment of this objective are: o Program participation rates. o Number and type of interactions with the juvenile justice system. o Parental observation of improvements in their children's behaviors. o Student self-reports of their lack of involvement with the criminal justice system. Specifying the information needed will ensure that you do not collect more than you need. It also keeps the cost and time required for the evaluation to a minimum. Given that most Public Housing Agencies (PHAs) have limited resources, you will want to collect only information that is actually needed for the evaluation. STEP 2: IDENTIFY THE BEST SOURCES FOR THE INFORMATION YOU NEED Every data element usually has a range of potential information sources, including: o Program records, such as case records and program pretest and posttest scores on any tests given. o Program management information systems. o Program reports and documents. o Program staff. o Program participants. o Family members of participants. o Members of a control or comparison group. o Staff of collaborating agencies. o Records from other agencies, such as health agencies, schools, criminal justice agencies, mental health agencies, child welfare agencies, and social service agencies. o Community leaders. o Outside experts. o The general public. o National databases. To decide the best sources for information, ask yourself three questions: 1. What sources are likely to provide the most accurate information? 2. What sources are the least costly or time consuming? 3. Does the information collection pose an undue burden on the sources? Having accurate data sources for the evaluation is the most important factor. For example, it may be less costly or time consuming to obtain information about services from interviews with program staff, but staff may not provide as accurate information about services as case records could. When you interview staff, you are relying on their memories, but when you review case records, you should be able to obtain information about what actually did happen. If you choose to use case records to obtain evaluation information, however, you will need to make sure that staff are consistent in recording evaluation information in the records. Sometimes case record reviews can be difficult to use for evaluation purposes because they are incomplete or do not report either participant or service-related information in a consistent manner. STEP 3: SELECT OR DEVELOP DATA COLLECTION INSTRUMENTS There is a variety of types of data collection instruments, including: o Questionnaires or surveys (mail, inperson, or telephone). o Case record extraction forms. o Observation forms. Choosing a particular survey methodology will depend on your PHA's budget, the number and type of questions you need to ask, number of residents being surveyed, availability of staff and/or residents to conduct the surveys, and other factors. Types of surveys include mail, telephone, and inperson. Most likely, you will need to develop your own questionnaire to collect information that specifically addresses your particular program objectives. This is not a complicated process. The discussion on the following pages outlines the steps in developing surveys. Each type of data collection instrument has both advantages and disadvantages, which are listed below. 
------------------------------------------------------ ADVANTAGES AND DISADVANTAGES OF DIFFERENT DATA COLLECTION INSTRUMENTS Inperson interview Advantages: o Enables the interviewer to establish rapport o Enables the interviewer to observe whether or not the question has been understood o Enables the interviewer to probe for explanations Disadvantages: o Must be scheduled when the interviewee is available o Must have trained staff administering the questionnaire o May be costly to administer Telephone interview Advantages: o Allows the interviewer to make frequent, inexpensive attempts to contact the interviewee Disadvantages: o Resident must have a telephone o Does not allow interviewer to quickly ascertain if the person is confused by the question Mail instrument Advantages: o Allows residents to complete the survey at their convenience o Not costly Disadvantages: o Residents must be motivated to complete and return the instrument within the time specified o Does not work well with low-literacy populations who may not be able to read o Generally lower response rates Case record extraction form Advantages: o Enables you to obtain secondary information about program participants (for example, actual dates of participation, accurate counts of number and types of services received) Disadvantages: o Confidentiality constraints may not allow you access to program records o Case record information must be recorded accurately and consistently o Case record information may be incomplete Observation form Advantages: o Allows direct contact with ongoing program activities o Enables you to compare what was planned with what is actually occurring Disadvantages: o You must be able to conduct onsite program observations o The observer's presence may influence how the program is being conducted during observation ------------------------------------------------------ Developing a mail survey. The success of a mail survey depends on obtaining the cooperation of the resident, who must be motivated to complete and return the survey within the time specified. Therefore, the design of any mail survey must convey to the resident the following: o Purpose and importance of the survey (this can be done through a cover letter). o Clear instructions. o Confidentiality. o Established timeframe for completing and returning the survey. The cover letter. One of the simplest ways to convey the importance of the survey is through a cover letter. If possible, the letter should be personalized with the resident's name rather than using a generic "Dear Resident." The cover letter should spell out the purpose and objectives of the survey, who is being asked to respond to the survey and why, and how the resident was selected to receive the questionnaire. The letter should be signed by an official such as the PHA project manager or some other person to show that the study has been given a high profile. The letter must include a contact person and telephone number in case the resident has questions. In addition, the letter should include a preaddressed, stamped envelope for returning the survey. An example of a cover letter appears on the following page. ------------------------------------------------------ SAMPLE COVER LETTER Violence Prevention Program Resident Survey [Date] [Name] [Address] [Address] Dear [Name of Resident]: I am writing to ask you to be a part of an important study being conducted by the [name of PHA]. 
The purpose of this study is to learn more about how the [name of program] has affected residents and their families. We would like to get feedback from residents about the violence prevention services that have been provided through the [name of program]. By completing the enclosed questionnaire, you can help us to determine if this program is working and, if necessary, to make changes that will benefit you and other residents. You were randomly selected to complete this questionnaire because you participated in the [name of program]. Your answers will be kept strictly confidential and will not in any way affect your current or future eligibility for housing at [name of PHA] or any other services you may be receiving. Please take a few moments to complete the enclosed survey and return it in the enclosed pre- addressed, stamped envelope by [date]. Your prompt return of the enclosed questionnaire is extremely important to the success of this evaluation. If you have any questions about the survey, please call [name of contact person] at [telephone number]. Sincerely, [Name of PHA] ------------------------------------------------------ Clear instructions for completing the survey. The instructions for completing a self-administered mail survey are extremely important to ensure that the resident reports accurate data. In self- administered questionnaires the question wording and sequencing can have an effect on data quality. The resident who is completing the survey must first comprehend the questions. To ensure that the resident understands each question and each of the possible answers, you will need to be certain that: o Instructions are clear and concise. Residents or other respondents should be able to clearly understand how to answer the questions. o Simple skip instructions guide the respondents through the questions. Skip instructions enable the respondents to maneuver their way through the questionnaire and should be prominently placed so that the respondents do not overlook them. o Boxed instructions are included, as needed, to bring attention to certain instructions. o Instructions are consistent throughout the survey and are visually emphasized. A questionnaire must be laid out consistently to ensure that the respondents understand the instructional patterns of the survey. (The use of bold typeface or italics is one way to bring attention to an item.) o The format of questions is consistent. Consistency of the survey instrument is critical if the respondents are to follow instructional patterns. o Critical questions do not come at the end of the survey. Using existing data collection instruments. Many existing instruments can be used to assess participant outcomes. The Center for the Study & Prevention of Violence (CSPV), University of Colorado at Boulder maintains VIOEVAL, a database that stores reference information about survey instruments. Examples of survey references include the self-reporting delinquency scale, the self- efficacy scale, the violence scale, and the self-esteem scale. Contact information for CSPV is included in the resources section at the end of this manual. Before you select a standardized assessment instrument for your evaluation, be sure to ask an outside professional for advice, and ask administrators of similar programs about their experiences using the instrument. In addition, you should review each item on the instrument to ensure that the information it asks for is consistent with your expectations about how program participants will change. 
If you are unable to find an appropriate existing instrument to assess participant outcome objectives, you will need to develop your own. Developing your own outcome assessment instrument is a complex process and may require the assistance of an expert to ensure its usefulness for your evaluation. Criteria for selecting or developing an appropriate instrument. Whether you decide to use an existing instrument or to develop your own, the instrument you use should meet the following criteria. o It should include questions that can be used to measure the concepts addressed or affected by your program. For example, if you are providing alcohol and drug abuse prevention training, you would want an instrument to measure changes in knowledge of alcohol and drug abuse. o It should be appropriate for your participants in terms of age or developmental level, language, and ease of use. Questions should be written in simple and easy-to-understand language. o It should respect and reflect the participants' cultural backgrounds. The definitions, concepts, and items in the instrument should be relevant to the participants' community and experience. [See the sidebar on the following page.] o It should be able to be completed in a reasonable timeframe that is not a burden. ------------------------------------------------------ CULTURAL RELEVANCE Your forms should be sensitive to issues and concerns of your participant group. You will want to know: o Do participants understand the terms used on the forms? Is the language similar to their everyday language? Is the language at a level everyone can understand? o Are concepts that are dealt with in the questionnaires and forms familiar to participants? o Are questions asked in a thoughtful and nonintrusive manner? o Do the questions support the values of the participant group? ------------------------------------------------------ STEP 4: ESTABLISH PROCEDURES FOR COLLECTING INFORMATION Once you decide what type of instrument you will use to collect evaluation information, you must establish a set of procedures to ensure that this information will be collected in a consistent and systematic manner. Everyone involved in collecting evaluation information must be trained in these procedures: o When you will collect the information. The timeframe during which the data is to be collected must be clearly specified. There is some information that may need to be collected before the program starts and other information that needs to be collected at the end of the program. Having the timeframe spelled out will ensure that the information is collected as scheduled. o Where you will collect the information. You will need to determine the sources from which the information will be collected. In some instances you may be using program records, while in other instances you may be relying on participants coming to a specific location to complete the survey instrument or to participate in a group discussion about their experiences. You will need to determine where you will collect the information and convey this to program participants. o Who will collect the information. This responsibility must be clearly specified or you will risk having some information collection activities fall through the cracks. In some situations you will need to be sure that information collectors meet certain criteria. For example, they may need to be familiar with the culture or the language of the individuals they are interviewing or observing. 
If the survey is being administered by interviewers (for example, residents hired and trained to conduct interviews), those persons must be properly trained to administer the survey. Training will ensure that the interviewers are familiar with the survey instrument. ------------------------------------------------------ INFORMED CONSENT An important part of implementing an evaluation is ensuring that your participants are aware of what you are doing and that they are cooperating with the evaluation voluntarily. People should be allowed their privacy, and therefore they have the right to refuse to give any personal or family information, the right to refuse to answer any questions, and even the right to refuse to be a part of the evaluation at all. The best way to handle this is to explain the evaluation activities and what will be required of them as part of the evaluation effort. People should be told that their names will not be used and that the information they provide will not be linked to them. Then have them sign an informed consent form that documents that they understand the scope of the evaluation, that they agree (or disagree) to participate, that they understand what is expected of them, and that they understand that they have the right to refuse to give any information and may drop out of the evaluation at any time. If children are involved, then you must get the permission of their parents or guardians before the children participate in the evaluation. A sample informed consent form appears at the end of this chapter. ------------------------------------------------------ How you will collect the information. You have already decided what instrument to use to collect the information. However, you also need to establish the procedures for administering the instrument. Will it be administered to a group or to individuals? If you are collecting information from children, will other family members be present? If you are collecting information from individuals with a low level of literacy, will the data collectors read the items to them? The methods you use will depend in large part on the type of program and the characteristics of the participants. Violence prevention training and education programs, for example, may have participants complete the instrument in a group setting. It is very useful to develop a manual that describes exactly what is expected in the information collection process. This helps maintain the quality of the evaluation effort, especially when new staff are hired. Case record extraction form. If you are using program records as a source of information, you will need to develop a case record extraction form to use. This form provides a place for recording all of the information from participant records needed for the evaluation. Similarly, if you plan to interview program staff, you must develop interviews that focus specifically on the evaluation's information needs. Sometimes, in developing these instruments, you or your evaluator may decide that certain types of information would be "interesting" to collect. However, if the information does not relate directly to your program or outcome objectives, you should resist this urge. Observation forms or checklists. Another popular evaluation technique is the observation form or checklist. 
These tools are useful to record information about the environment where the program is located, the number of residents participating when the program is visited, and the behaviors of participating residents and service providers or staff. ------------------------------------------------------ STEP 5: PRETEST THE INFORMATION COLLECTION INSTRUMENTS AND PROCEDURES Before you begin collecting evaluation information, you will need to pretest your instruments and procedures. The pretest will determine whether the instruments and procedures obtain the information that you want, are not excessively burdensome, and are appropriate for your participant population. Use the pretest information to make any necessary revisions before you begin your evaluation. The kinds of information that can be obtained from a pretest include: o How long it takes to complete interviews, obtain information from participant records, or fill out questionnaires. o Whether self-administered questionnaires can be completed by participants without assistance. o Whether the records you need are readily available. o Whether you can collect the necessary information in the established timeframe. o Whether letters to inform participants of the evaluation or any required consent forms can be easily delivered. You may pretest your instruments with a small number of individuals or program records. You should instruct individuals involved in the pretest to take notes and make comments on the process of using the instruments. These notes and comments may be reviewed to determine whether changes are needed in the instruments or procedures. You must also review the completed instruments to assess the number of incomplete answers, unlikely answers, comments included in the margins, or other indicators that revisions are necessary. Generally, you will probably need to improve the wording of some questions and instructions for the respondent, as well as delete or add items. STEP 6: CONDUCT AND MONITOR DATA COLLECTION After you have completed steps 1 through 5, you are ready to begin collecting evaluation information. This process should be carefully monitored. Monitoring will ensure consistency in the data collection process and that everyone adheres to the time intervals established for collecting information from individual participants. As part of the monitoring process, you may want to establish a schedule for submitting completed data collection instruments to the evaluation team. This will ensure that instruments are not lost and that confidentiality is maintained. Completed data collection instruments should be treated confidentially; it is a good idea to have completed forms submitted immediately to a member of the evaluation team. To ensure quality control, information collection staff, particularly if they also are program staff, must be fully educated about the importance of carefully administering and consistently completing evaluation instruments. The first priority of program staff is usually providing services or training to participants, and little effort is given to collecting evaluation information. Encourage your staff to focus on evaluation collection as an important aspect of providing services or training to program participants. ------------------------------------------------------ QUALITY CONTROL PROCEDURES To make sure that information is being collected appropriately, you must also implement quality control procedures. These will have been stated in your evaluation plan and are noted in chapter 5. 
Quality control is an essential feature of the data collection phase and must be implemented on an ongoing basis throughout the process. Nothing is more damaging to an evaluation effort than information collection instruments that have been incorrectly or inconsistently administered or that are incomplete. ------------------------------------------------------ Once evaluation information is collected, you can begin to analyze it. This process may take place on an ongoing basis or after all data have been collected. The procedures for analyzing and interpreting the evaluation information are discussed in the following chapter. ------------------------------------------------------------------- SAMPLE INFORMED CONSENT FORM [Name of PHA] would like you to participate in the evaluation of [program name]. Your participation is important to us and will help us assess the effectiveness of the program. As a resident of [complex name] we will ask you to [complete a questionnaire, answer questions in an interview, or other tasks]. We will keep all of your answers confidential. Your name will never be included in any reports and none of your answers will be linked to you in any way. The information that you provide will be combined with information from everyone else participating in the study. [If information/data collection includes questions about drug abuse or other illegal activity, the program should make clear its legal obligation to report this information and should let the participant know that confidentiality will be broken in these cases.] You do not have to participate in the evaluation. Even if you agree to participate now, you may stop participating at any time or refuse to answer any question. Refusing to be part of the evaluation will not affect the services you receive in [program name]. If you have any questions about the study, you may call [name and telephone number of evaluator, housing director, or community advocate]. By signing below, you confirm that this form has been explained to you and that you understand it. Please check one: o AGREE TO PARTICIPATE o DO NOT AGREE TO PARTICIPATE Signed: _____________________ Date: _______________________ ------------------------------------------------------ CHAPTER 7 -- ANALYZING EVALUATION INFORMATION Once data collection is over, the next task is to take the information you have and use it to draw conclusions about your program. This is the step in evaluation known as analysis. Analysis frequently involves performing sophisticated statistical procedures, which are almost always the responsibility of a trained evaluator. Unless you have an experienced evaluator on staff, you will need outside assistance in deciding on appropriate statistical analyses (during development of the evaluation plan), setting up a database, and conducting statistical tests. Although a specialist may be responsible for this work during the evaluation, reviewing the material in this chapter will help you ensure that the analysis covers your specific questions and not simply those an outside expert may feel are most important. This chapter briefly shows how information can be analyzed to provide answers about whether a program has attained its objectives and if specific barriers or facilitators have played a part in what happened. Again, it is important to realize that not attaining a particular objective is not a sign of program failure. 
The evaluation's focus is to provide you with information you need to plan for new initiatives or improve your current program. For a process evaluation, an evaluator or analyst looks at the activities actually carried out by the program and compares them to those initially planned. Two basic steps are involved in a process evaluation:
Step 1: Describing what your program did (or what you are currently doing), the people who performed these activities, the number and characteristics of participants, and the schedule of program activities.
Step 2: Comparing this information to your initial objectives and determining whether there is a difference between the objectives and actual program implementation. This will help answer the questions: Were program implementation objectives attained? If these objectives were not attained, why not?
If it looks as though there are differences between what your program actually did and your initial objectives, you can analyze your evaluation information to identify the reasons for the differences. You can also use evaluation information to identify specific barriers encountered and those factors that facilitated implementation. Worksheet 7-1, the "Planned Versus Actual Performance" list at the end of this chapter, uses the example of a resident security patrol activity that was introduced in chapter 4. The list represents an analysis of a specific program's measurable implementation objectives -- namely, what the program planned to do. Note that:
o Measurable objectives appear first.
o Actual implementation information appears second. (You can see that there are differences between objectives and actual implementation for several of the measurable objectives.)
o The third entry notes the presence or absence of a difference, and the fourth entry provides the reasons for any difference.
o The next two entries identify the barriers and the facilitating factors, respectively. They provide the context for understanding the program and thus will help you interpret the results of your analysis.
Outcome evaluation. Analysis of outcome information answers two basic questions:
o Did the expected changes occur, such as a decrease in violent acts, or changes in participants' knowledge, attitudes, behavior, or awareness?
o If changes occurred, were they the result of your program's activities?
Another question that can be included in your analysis of participant outcome information is:
o Did some participants change more than others and, if so, what explains this difference?
The results of your analysis can be used to answer your initial evaluation questions.
o Are program outcome objectives being attained?
o If not, why not?
o What factors contributed to attainment of objectives?
o What factors were barriers to attainment of objectives?
These questions can be answered by interpreting the results of the statistical procedures performed on the program outcome information. To fully address such questions, however, you will also need to look at the results of the analysis of program implementation information. This will provide a context for interpreting statistical results. In reviewing program implementation information, you may find, for example, that the security foot patrol component of the program was not successfully implemented as intended and that the problem encountered in implementing this component was difficulty in recruiting enough residents who were willing to participate in the patrols.
Since the outcome associated with this component was a decrease in violent acts, a less-than-expected change may be attributable to the problems encountered in implementing this objective. UNDERSTANDING STATISTICAL PROCEDURES The purpose of any violence prevention activity is to cause a change -- to decrease violent acts, to increase residents' satisfaction with security, or to enlist the support of community agencies in housing development activities. Evaluation research uses statistical procedures to determine whether a change occurred and whether that change is statistically significant. Generally, a change is considered significant if the probability of it happening by chance is less than 5 in 100 cases. A trained evaluator will perform specific statistical tests, using standard formulas, on the data to determine whether a result is significant. For example, if your evaluation finds a 35 percent decrease in assaults in the housing development, you can intuitively assume this is significant. However, suppose the decrease is 6 percent -- would you consider this to be significant? Statistical procedures can provide answers in this type of situation. Statistical tests are also used to determine the relationships between variables (factors) in an analysis. Statistical tests usually have a dependent variable and one or more independent variables. Dependent variables are essentially your performance measures -- what it is that you expect to change because of your program activities. For example, a change in the number of violent acts occurring on housing development property, a change in the level of resident satisfaction with housing development security, or a change in the amount of vandalism can all be dependent variables. Independent variables are factors that you believe may cause the dependent variables to change. Your program activities are independent variables. You expect these activities to cause a change; that is, to prevent or decrease violent acts. Resident characteristics can also be considered independent variables because in some circumstances the age, sex, or ethnicity of the residents may influence behavior. For example, you might expect a larger decrease in violent activity among teenagers who had been exposed to a violence prevention activity at an earlier age than those who were exposed to it after their behavior had become more fixed. A statistical test assesses the relationship between a dependent variable and one or more independent variables. Such a test can tell you whether the measures of the dependent variable (for example, the level of resident satisfaction with security) vary as a function of the independent variable. For example, are older residents more/less satisfied with housing development security than younger residents? Are female residents more/less satisfied with housing development security than male residents? Are younger male teenagers more/less likely to participate in a housing development recreation activity than older male teenagers? The more independent variables you include in your statistical analysis, the more you will understand about your program's effectiveness. Lack of a significant change among your participants as a group does not necessarily rule out program effectiveness. 
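To make the idea of a significance test concrete, the Python sketch below applies a chi-square test to the same hypothetical before-and-after assault totals used in the chapter 5 sketch, comparing the program development with a comparison development. The counts, the use of the SciPy package, and the conventional cutoff of 5 chances in 100 (p < 0.05) are assumptions made for the illustration; your evaluator may well choose a different test for your data.
------------------------------------------------------
EXAMPLE (ILLUSTRATIVE ONLY): A SIMPLE SIGNIFICANCE TEST ON BEFORE-AND-AFTER COUNTS

    # Illustrative sketch only: is the drop in assaults at the program development
    # larger than chance would explain, relative to a comparison development?
    # Counts are hypothetical; requires the SciPy package.
    from scipy.stats import chi2_contingency

    # Assaults in the 6 months before and the 6 months after the program began.
    #                     before  after
    counts = [[84, 63],   # program development
              [81, 78]]   # comparison development

    chi2, p_value, dof, expected = chi2_contingency(counts)

    print(f"Program development change in assaults: {100.0 * (63 - 84) / 84:.1f}%")
    print(f"Chi-square = {chi2:.2f}, p = {p_value:.3f}")

    # The conventional standard treats a result as significant when p < 0.05,
    # that is, less than 5 chances in 100 that the difference arose by chance.
    if p_value < 0.05:
        print("The difference between the two developments is statistically significant.")
    else:
        print("The difference could plausibly be due to chance; more time or data may be needed.")
------------------------------------------------------
The same test can be repeated separately for subgroups of residents, which is where independent variables such as age or sex become useful.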
If you include the independent variable of age in your analysis, you may find that older single head-of-household mothers (ages 25 to 35) demonstrated significant differences in before-and-after program scores but younger single head-of-household mothers (ages 17 to 24) did not. This would help you understand that your program's activities are effective for the older single head-of-household mothers in your target population, but not for the younger ones. You may then want to implement different types of interventions for the younger mothers of the housing development, or you may want to limit your program recruitment to older mothers, who seem to benefit from what you are doing. USING THE RESULTS OF AN ANALYSIS Statistical procedures can provide hard information to help in making decisions, but the final decision about increasing or decreasing violence prevention activity levels remains a subjective one, based on your knowledge and understanding of your crime control program's circumstances. For example, simply knowing that there was a 5 percent decrease in assaults is itself a valuable piece of information. With it, you can decide whether you want to increase prevention efforts in the area or whether this decrease -- assuming it continues -- is acceptable. Findings from the study that identified the need for this manual suggest that very few violence prevention initiatives have been evaluated and shown to be effective. Therefore, in the field of public housing, you do not have access to reliable information on what does and does not work. Evaluation findings in this area will provide valuable knowledge in the field. Ideally, once you have your data, analysis will be an ongoing task in an evaluation so that you are producing information you can use throughout the process. Your evaluation report schedule will depend on your needs, on the timeframe for the evaluation, and on the program's duration. Reports should present results that show statistically significant changes as well as those that do not show statistical significance. The next chapter has more information on writing an evaluation report and disseminating the results of your evaluation. ------------------------------------------------------
WORKSHEET 7-1: PLANNED VERSUS ACTUAL PERFORMANCE

Short-Term Performance Measures:

1. Objective -- Meet with local police in first month
Actual Performance -- Met with police during second week
Difference? -- No
Reason for Change --
Barriers Encountered -- None, although some difficulty in scheduling mutually convenient time
Facilitating Factors -- Police supportive of effort

2. Objective -- Meet with resident council
Actual Performance -- Met with council during third week
Difference? -- No
Reason for Change --
Barriers Encountered -- Council members were planning a major event; had little time initially
Facilitating Factors -- Council eager to participate, recognized problem

3. Objective -- Announce plan to residents
Actual Performance -- Announced plan during fourth week
Difference? -- No
Reason for Change -- None listed
Barriers Encountered -- None
Facilitating Factors -- None

4. Objective -- Recruit 20 volunteers in first 2 months
Actual Performance -- Recruited nine volunteers in first 2 months
Difference? -- Yes
Reason for Change -- Residents reluctant to volunteer for security patrols
Barriers Encountered -- Residents' perceived danger of joining patrol; resident council not actively involved initially
Facilitating Factors -- Resident council involvement increased, resulting in more volunteers

5. Objective -- Train first 12 volunteers within 2 weeks of recruitment
Actual Performance -- Trained nine volunteers
Difference? -- Yes
Reason for Change -- Only nine residents volunteered during first 2 months
Barriers Encountered -- No barriers to training; only lack of volunteers
Facilitating Factors -- Police cooperation vital to success of training program, allayed residents' fears

6. Objective -- Have four three-person teams within 2 months of initiating activity
Actual Performance -- Had three three-person teams
Difference? -- Yes
Reason for Change -- Lack of volunteers
Barriers Encountered -- No barriers to training; only lack of volunteers
Facilitating Factors -- Police cooperation vital to success of training program, allayed residents' fears

Long-Term Performance Measures:

1. Objective -- Recruit 30 volunteers within first 9 months
Actual Performance -- Recruited 24 volunteers in first 9 months
Difference? -- Yes
Reason for Change -- Full recruitment quota not met, although recruitment increased
Barriers Encountered -- Residents lacked child care during patrol hours
Facilitating Factors -- Police involvement in training emphasized safety of patrol team members

2. Objective -- Have seven three-person teams within 9 months of initiating activity
Actual Performance -- Had six three-person teams
Difference? -- Yes
Reason for Change -- Full recruitment quota not met, although recruitment increased
Barriers Encountered -- Residents lacked child care during patrol hours
Facilitating Factors -- Police involvement in training emphasized safety of patrol team members

3. Objective -- Meet monthly with local police and resident council
Actual Performance -- Met monthly with police and council
Difference? -- No
Reason for Change --
Barriers Encountered -- Difficulty finding mutually convenient time for all parties
Facilitating Factors -- Police made the project a priority

Overall Objective -- Decrease assaults, rapes, robberies, and murders by 10 percent after 2 months' operation.
Actual Performance -- Decreases: assaults, 6 percent; rapes, 14 percent; robberies, no change; murders, 20 percent.
------------------------------------------------------ CHAPTER 8 -- REPORTING YOUR FINDINGS A program evaluation report is an important document. It integrates what you have learned from the evaluation of your initiative. There are different ways of reporting this information, depending on how you want to use the report and on your audience. A program evaluation report can do the following: o Guide management decisions by identifying areas in which changes may be needed for future implementation. o Tell the story of implementing your initiative and show the impact on residents. o Advocate your initiative to potential funders or to other agencies in the community. o Improve violence prevention efforts in public housing. These uses suggest that there are various audiences for an evaluation report. These audiences can include staff and Public Housing Agency (PHA) administrators, current and potential funding sources, other PHAs, and local and national advocacy organizations.
Whatever type of report you plan to develop, it is critical to include statistically nonsignificant analysis results as well as statistically significant ones. There is as much to learn from program approaches or models that do not work, and from the reasons you believe they did not work, as there is from approaches that do appear to work. Nonsignificant results should not be thought of as failures. Efforts to change knowledge, attitudes, and behaviors through programmatic interventions are not always going to work. Currently, so little is known about what does and does not work that any information on violence prevention in public housing will greatly increase knowledge in the field. PREPARING AN EVALUATION REPORT FOR PROGRAM FUNDERS The report to program funders will probably be the most comprehensive one you prepare. Often funders will use your report to demonstrate the effectiveness of their grant initiatives and to support allocation of additional funds for similar program-related efforts. Demonstrating that your program is worthwhile (and a good use of their money) is as important to your funding sources as it is to you. A report that is useful for this purpose will include detailed information about the program, the evaluation design and methods, and the types of data analyses conducted. A sample outline of an evaluation report for program funders is shown below. The outline is developed as a final report and assumes all the information collected on your program has been analyzed. However, this outline may also be used for interim reports, with different sections completed at various times during the evaluation and feedback provided to program personnel on the ongoing status of the evaluation. ------------------------------------------------------ SAMPLE OUTLINE Final Evaluation Report I. Introduction: General Description of the Initiative (approximately one page long) A. Description of initiative components, including services or training delivered and target population for each component. B. Description of collaborative efforts (if relevant), including the agencies participating in the collaboration and their various roles and responsibilities in the initiative. C. Description of strategies for recruiting residents (if relevant). D. Description of special issues relevant to serving the residents and plans to address them. 1. Agency and staffing issues. 2. Residents' cultural backgrounds, socioeconomic status, literacy levels, and so forth. II. Evaluation of Implementation Objectives A. Description of implementation objectives (measurable objectives). 1. What you planned to do (planned services/interventions/training/education; duration and intensity of each service/intervention/training period). 2. Who you planned to have do it (planned staffing arrangements and qualifications/characteristics of staff). 3. Target population (intended characteristics and number of members of the target population to be reached by each service/intervention/training/education effort and how you plan to recruit residents). 4. Description of the objectives for collaborating with community agencies. a. Planned collaborative arrangements. b. Services/interventions/training provided by collaborating agencies. B. Statement of evaluation questions (Were program implementation objectives attained? If not, why not? What were the barriers to and facilitators of attaining implementation objectives?). Examples: 1.
How successful was the program in attaining its objective of implementing an afterschool program for resident youths? What were the policies, practices, and procedures used to attain this objective? What were the barriers and facilitators to attaining this objective? 2. How successful was the program in recruiting the intended target population and serving the expected number of participants? What were the policies, practices, and procedures used to recruit and maintain participants in the program? What were the barriers and facilitators to attaining this objective? 3. How successful was the program in attaining its objective with respect to establishing collaborative relationships with other agencies in the community? What were the policies, practices, and procedures used to attain this objective? What were the barriers and facilitators to attaining this objective? C. Description of data collection methods and data collected for each evaluation question. 1. Description of data collected. 2. Description of methodology of data collection. 3. Description of data sources (such as documents, staff, residents, and collaborating agency staff). D. Description of data analysis procedures. E. Description of results of analysis. 1. Statement of findings with respect to each evaluation question. Examples: a. The program's success in attaining the objective. b. The effectiveness of particular policies, practices, and procedures in attaining the objective. c. The barriers and facilitators to attaining the objective. 2. Statement of issues that may have affected the evaluation's findings. Examples: a. The need to make changes in the evaluation because of changes in program implementation or characteristics of the residents served. b. Staff turnover during the research, resulting in inconsistent data collection procedures. c. Changes in evaluation staff. III. Evaluation of Outcome Objectives A. Description of outcome objectives (in measurable terms). 1. What changes were residents expected to exhibit as a result of their participation in each service/intervention/ training module provided by the program? 2. What changes were residents expected to exhibit as a result of participation in the program in general? 3. What changes were expected to occur in the community? B. Statement of evaluation questions, evaluation design, and method for assessing change for each question. Examples: 1. How effective was the program in attaining its expected outcome of decreasing youth gang involvement? How was this measured? What design was used to establish that a change occurred and to relate change to the program's interventions (such as pre- and post- intervention, control groups, and comparison groups)? Why was this design selected? 2. How effective was the program in attaining its expected outcome of increasing youths' self-esteem? How was this measured? What design was used to establish that a change occurred and to relate the change to the interventions? Why was this design selected? 3. How effective was the program in increasing the knowledge and skills of participants? How was this measured? What design was used to establish that a change occurred and to relate the change to the interventions? Why was this design selected? C. Discussion of data collection methodologies for each evaluation question. 1. Discussion of data collected. 2. Discussion of methodology of data collection. Examples: a. Case record reviews. b. Interviews. c. Self-report questionnaires or inventories. 
[If you developed an instrument for this evaluation, attach a copy to the final report.] d. Observations. 3. Data sources for each evaluation question, and sampling plans when relevant. D. Discussion of issues that affected the outcome evaluation and how they were addressed. 1. Program-related issues. a. Staff turnover. b. Changes in target population characteristics. c. Changes in services/interventions during the course of the program. d. Changes in staffing plans. e. Changes in collaborative arrangements. f. Characteristics of participants. 2. Evaluation-related issues. a. Problems encountered in obtaining participant consent. b. Change in numbers of participants served, requiring change in analysis plans. c. Questionable cultural relevance of evaluation data collection instruments and/or procedures. E. Data analysis procedures. 1. Summary of procedures. 2. Distributions of program characteristics and participant (resident) characteristics. 3. Descriptive measures (for example, averages or most commonly occurring responses). 4. Cross tabulations (for example, outcome measures by various participant groups). F. Results of data analysis. 1. Present statistically significant and nonsignificant analysis results (including statement of established level of significance) for each outcome evaluation question. 2. Discuss any issues or problems relevant to the analysis. Examples: a. Issues relevant to data collection procedures, particularly consistency in methods and consistency among data collectors. b. Issues relevant to the number of participants served by the program and those included in the analysis. c. Missing data or differences in sizes of samples for the analysis. G. Discussion of results. 1. Provide an interpretation of results for each evaluation question, including any explanatory information from the process evaluation. a. The effectiveness of the program in attaining a specific outcome objective. b. Variables associated with attaining specific outcomes, such as characteristics of the population, characteristics of the service provider or trainer, duration and/or intensity of services or training, and characteristics of the service or training. 2. Discuss any issues relevant to the interpretation of results. IV. Integration of Process and Outcome Evaluation Information A. Summary of process evaluation results. B. Summary of outcome evaluation results. C. Discussion of potential relationships between implementation and outcome evaluation results. Examples: 1. Did particular policies, practices, or procedures used to attain program implementation objectives have differing impacts on participant outcomes? 2. How did practices and procedures used to recruit and maintain participants in services affect participant outcomes? 3. What collaboration practices and procedures were found to be related to attaining expected community outcomes? 4. Were particular training modules more effective than others in attaining expected outcomes for participants? If so, what were the features of these modules that may have contributed to their effectiveness (such as characteristics of the trainers, characteristics of the curriculum, and the duration and intensity of the services)? V. 
Recommendations to Program Administrators for Future Program and Evaluation Efforts ------------------------------------------------------ PREPARING AN EVALUATION REPORT FOR STAFF AND PHA PERSONNEL An evaluation report for staff and Public Housing Agency (PHA) personnel may be used to support management decisions about ongoing or future violence prevention efforts. This type of report may not need to include as much detail on the evaluation methodology but might focus instead on findings. The report could include the information noted in the sample outline of the final evaluation report, including information in sections II. E. (description of results of analysis of implementation information), III. D. (discussion of issues that affected the outcome evaluation and how they were addressed), III. F. (results of data analysis on outcome information), III. G. (discussion of results), and IV. C. (discussion of potential relationships between implementation and outcome evaluation results). Final reports should be accompanied by an executive summary of one to five pages that summarizes the key evaluation methods and results, so that readers will not have to review all of the details of the report if they do not have the time. DISSEMINATING THE RESULTS OF YOUR EVALUATION In addition to producing formal evaluation reports, you may want to take advantage of other opportunities to share what you have learned with people in your community or with the field in general. You might want to consider drafting letters to community organizations that may be interested in the activities and results of your work. Other ways to let people know what you have done include the following: o Producing press releases and articles for local professional publications such as newsletters and journals. o Making presentations on the results of your program at meetings held at the local police department, university, public library, or in other settings. o Listing your evaluation report or other evaluation-related publications in relevant databases, on electronic bulletin boards, and with clearinghouses. o Making phone calls and scheduling meetings with similar programs to share your experience and results. Many of the materials listed in the resources section of this manual contain ideas and guidelines for producing different types of informational products related to evaluations. ------------------------------------------------------ GLOSSARY baseline data -- Initial information on residents, the housing complex, or other program aspects collected prior to receipt of services or participation in program activities. Baseline data are often gathered through intake interviews and observations and are used later for comparing measures that determine changes in your residents, program, or environment. comparison group -- A group of individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your residents or program participants. These individuals may not receive any services, or they may receive a different set of services, activities, or products; in no instance do they receive the same services as those you are evaluating. As part of the evaluation process, the experimental group (or those receiving your services) and the comparison group are assessed to determine which types of services, activities, or products provided by your program produced the expected changes.
confidentiality form -- A written form that assures evaluation participants that information provided will not be openly disclosed or associated with them by name. Since an evaluation may entail exchanging or gathering privileged or sensitive information about residents or other individuals, a confidentiality form ensures that the participants' privacy will be maintained. consultant -- An individual who provides expert or professional advice or services, often in a paid capacity. control group -- A group of individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your residents or program participants but who do not receive the program services, products, or activities you are evaluating. Participants are randomly assigned to either the experimental group (those receiving your services) or the control group. A control group is used to assess the effect of your program activities on residents or participants compared to similar individuals who are not receiving the services, products, or activities you are evaluating. The same information is collected for people in the control group and those in the experimental group. cultural relevance -- Demonstration that evaluation methods, procedures, and/or instruments are appropriate for the culture(s) to which they are applied. (Other terms include cultural competency and cultural sensitivity.) culture -- The shared values, traditions, norms, customs, arts, history, institutions, and experience of a group of people. The group may be identified by race, age, ethnicity, language, national origin, religion, or other social categories or groupings. data -- Specific information or facts that are collected. A data item is usually a discrete or single measure. Examples of data items include age, date of entry into program, or reading level. Sources of data include case records, attendance records, referrals, assessments, and interviews. data analysis -- The process of systematically applying statistical and logical techniques to describe, summarize, and compare data collected. data collection instruments -- Forms used to collect information for your evaluation. Forms may include interview instruments, intake forms, case logs, and attendance records. They may be developed specifically for your evaluation or modified from existing instruments. A professional evaluator can help select those that are most appropriate for your program. data collection plan -- A written document describing the specific procedures to be used to gather the evaluation information or data. The document describes who collects the information, when and where it is collected, and how it is to be obtained. database -- A collection of information that has been systematically organized for easy access and analysis. Databases typically are computerized. design -- The overall plan for a particular evaluation. The design describes how you plan to measure program performance and includes your performance indicators. evaluation -- A systematic method for collecting, analyzing, and using information to answer basic questions about your program. It helps to identify effective and ineffective services, practices, and approaches. evaluation plan -- A written document describing the overall approach or design you anticipate using to guide your evaluation. It includes what you plan to do, how you plan to do it, who will do it, when it will be done, and why the evaluation is being conducted. The evaluation plan serves as a guide for the evaluation.
evaluation team -- The individuals, such as the outside evaluator, evaluation consultant, public housing administrator, and staff, who participate in planning and conducting the evaluation. Team members assist in developing the evaluation design, developing data collection instruments, collecting data, analyzing data, and writing the report. evaluator -- An individual trained and experienced in designing and conducting an evaluation who uses tested and accepted research methodologies. experimental group -- A group of residents or individuals participating in the activities or receiving the services being evaluated or studied. Experimental groups (also known as treatment groups) are usually compared to a control or comparison group. focus group -- A group of 7 to 10 people convened for the purpose of obtaining perceptions or opinions, suggesting ideas, or recommending actions. A focus group is a method of collecting information for evaluation purposes. formative evaluation -- A type of process evaluation of new programs or services that focuses on collecting data on program operations so that needed changes or modifications can be made to the program in its early stages. Formative evaluations are used to provide feedback to staff about the program components that are working and those that need to be changed. immediate outcomes -- The changes in program participants' (for example, residents') knowledge, attitudes, and behavior that occur early in the course of the program activities. These changes may occur at certain times during these activities. For example, acknowledging gang involvement is an immediate outcome. impact evaluation -- A type of outcome evaluation that focuses on the broad, long-term impacts or results of program activities. For example, an impact evaluation could show that a decrease in a community's crime rate is the direct result of a program designed to provide community policing. informed consent -- A written agreement by program participants (for example, residents) to voluntarily participate in an evaluation or study after having been advised of the purpose of the study, the type of information being collected, and how the information will be used. A sample informed consent form appears in chapter 6. instrument -- A tool used to collect and organize information. Examples include written measures such as questionnaires, scales, and tests. intermediate outcomes -- Results or outcomes of program activities that may require some time before they are realized. For example, participation in an afterschool sports program would be an intermediate outcome of program activities designed to prevent at-risk youths from involvement in gangs. internal resources -- An agency's or organization's resources, including staff skills and experiences and any information already available through current program activities. management information system (MIS) -- An information collection and analysis system, usually computerized, that facilitates access to program and participant (for example, resident) information. It is usually designed and used for administrative purposes. The types of information typically included in an MIS are resident sociodemographic information, resident contacts, referrals, and program activities. measurable terms -- Specification, in clear language, of what you plan to do and how you plan to do it.
Stating time periods for activities, dosage or frequency information (such as three 1-hour training sessions), and number of residents helps to make project activities measurable. methodology -- The way in which you find out information; a methodology describes how something will be (or was) done. The methodology includes the methods, procedures, and techniques used to collect and analyze information. monitoring -- The process of reviewing a program or activity to determine whether set standards or requirements are being met. Unlike evaluation, monitoring compares a program to an ideal or exact state. objective -- A specific statement that explains how a program goal will be accomplished. For example, an objective of the goal to improve resident safety could be to implement resident patrols on a daily basis for 6 months. An objective is stated so that the change it describes, in this case implementation of daily resident patrols, can be measured and analyzed. Objectives are written using measurable terms and are time limited. outcomes -- Outcomes are the results of the program, services, or activities you provide, and they refer to changes in knowledge, attitudes, or behavior in participants. They are referred to as participant outcomes in this manual. outcome evaluation -- Evaluation designed to assess the extent to which a program affects residents according to specific performance indicators. These results are expected to be caused by program activities and are tested by comparison of results among sample groups in the target population. Outcome evaluation includes performance measurement; it is also known as impact or summative evaluation. outcome objectives -- The changes in residents' knowledge, attitudes, awareness, or behavior that you expect to occur as a result of implementing your program components, services, or activities. Also known as participant outcome objectives. outside evaluator -- An evaluator not affiliated with your housing agency prior to the program evaluation. Also known as a third-party evaluator. participant -- A resident, family, complex, neighborhood, or community receiving or participating in services provided by your program. Also known as a client or target population group. performance indicator -- An indicator against which to measure the extent to which individual services or activities reached their intended goals. For example, after implementing a resident safety patrol, you might specify that the number of police calls will be reduced by 35 percent. performance measurement -- The process of setting measurable goals and benchmarks (performance indicators) by which the effectiveness of program activities is determined. pilot test -- Preliminary test or study of your program or evaluation activities to try out procedures and make any needed changes or adjustments. For example, an agency may pilot test new data collection instruments that were developed for the evaluation. posttest -- A test or measurement taken after services or activities have taken place. It is compared with the results of a pretest to show evidence of the effects or changes as a result of the services or activities being evaluated. pretest -- A test or measurement taken before services or activities begin. It is compared with the results of a posttest to show evidence of the effects of the services or activities being evaluated. A pretest can be used to obtain baseline data.
process evaluation -- An evaluation that examines the extent to which activities are operating as intended by assessing ongoing program operations and whether members of the targeted population (the residents) are being served. A process evaluation involves collecting data that describes program operations in detail, including the types and levels of services provided, the location of service delivery, staffing, sociodemographic characteristics of residents, information about the community in which services are being provided, and the linkages with collaborating agencies. A process evaluation helps program staff identify needed activities and/or change program components to improve service delivery. It is also called formative or implementation evaluation. program implementation objectives -- What you plan to do in your program, components, or services. For example, providing security patrols in five buildings, three times each evening, is referred to as a program implementation objective. qualitative data -- Information that is difficult to measure, count, or express in numerical terms. For example, how safe a resident feels in his or her apartment is qualitative data. quantitative data -- Information that can be expressed in numerical terms, counted, or compared on a scale (for example, the number of 911 calls in a given month). random assignment -- The assignment of individuals in the pool of all potential participants to either the experimental (treatment) group or the control group in such a manner that their assignment to a group is determined entirely by chance. reliability -- Extent to which a measurement (such as a questionnaire or a data collection procedure) produces consistent results over repeated observations or administrations of the instrument under the same conditions each time. It is also important that reliability be maintained among data collectors. sample -- A subset of participants (for example, residents) selected from the total study population. Samples can be random (selected by chance, such as every sixth apartment) or nonrandom (selected purposefully, such as all 16-year-olds in a conflict resolution program). standardized instruments -- Assessments, inventories, questionnaires, or interviews that have been tested with a large number of individuals and are designed to be administered to program participants in a consistent manner. Results of tests with program participants can be compared to reported results of the tests used with other populations. statistical procedures -- The set of standards and rules based in statistical theory by which one can describe and evaluate what has occurred. statistical test -- Type of statistical procedure that is applied to data to determine whether the results are statistically significant (that is, the outcome is not likely to have resulted by chance alone). summative evaluation -- A type of outcome evaluation that assesses the results or outcomes of a program. This type of evaluation is concerned with a program's overall effectiveness. validity -- The extent to which a measurement instrument or test accurately measures what it is supposed to measure. For example, a reading test is a valid measure of reading skills, but it is not a valid measure of total language competency. variables -- Specific characteristics or attributes, such as behaviors, age, or test scores, that are expected to change or vary. 
For example, the level of adolescent drug use after exposure to a drug prevention program is one variable that may be examined in an evaluation. ------------------------------------------------------ EVALUATION RESOURCES Evaluation Studies Currently Underway Although very few evaluation studies have been completed in the area of violence prevention in public housing or similar environments, several evaluation studies are under way. Many of these evaluations are in the third year of a 5-year evaluation process, so final results will not be available for several years. This section provides information on several of these violence prevention evaluations. Funding for the majority of these studies was provided by several Federal agencies, including the Department of Housing and Urban Development, National Institutes of Health, Center for Substance Abuse Prevention, Department of Health and Human Services, and the National Institute of Justice. These particular evaluation studies are being referenced because of their focus on preventing violence in public housing or similar environments. Listed below is a brief description of each of these studies, including the funding source, location of the evaluation, and the current stage of the evaluation. Center for Substance Abuse Prevention (CSAP) Community Partnership Projects 5600 Fishers Lane Rockwall 2 Building, 9th Floor Rockville, MD 20857 Some of the projects funded by the Center for Substance Abuse Prevention (CSAP) through the community partnership projects focus on violence prevention. Projects being evaluated are located throughout the United States. These projects are required to include an evaluation component, and each funded project has its own evaluator. CSAP does not currently have information on all of the projects that specifically focus on violence prevention. CSAP will, however, conduct a cross-site evaluation of the community partnership projects and is currently in the process of completing a secondary analysis of data for these projects. For additional information, call (301) 443-0365. Crime, Fear, and Disorder Reduction Program -- Project ROAR (Reclaiming Our Area Residents) Spokane, Washington This project is being funded with public housing drug elimination program grant funds. Data for this evaluation are being obtained through interviews with residents, surveys of city residents, a physical inventory of the neighborhoods, and arrest data. The outcome measures include physical changes in the areas in the last 6 months, social changes that have occurred, satisfaction with the neighborhoods, and changes in feelings of safety during both day and night. For more information contact: Edmund McGarrell Director, Crime Control Policy Program Hudson Institute Associate Professor Department of Criminal Justice Indiana University Bloomington, Indiana 47405 mcgarrel@indiana.edu Evaluation of the Chicago Housing Authority's Anti-Drug Initiative: A Model of Comprehensive Crime Prevention in Public Housing The Chicago Housing Authority's (CHA's) Anti-Drug Initiative began in 1988 in response to the severe problems with drugs and crime in its developments. This initiative is a collaborative effort that includes the Chicago police, CHA police, security guards, CHA management, social service providers, and residents.
Community crime prevention, situational crime prevention, law enforcement interventions, and drug prevention and treatment are the four interventions being used as part of CHA's anti-crime efforts. Abt Associates, Inc., is currently conducting an evaluation of the impact of the Anti-Drug Initiative in a sample of nine buildings in three CHA developments. The evaluation methodology includes resident surveys of the buildings and in-depth interviews with key resident leaders and staff. When the evaluation is completed, a final report will be prepared. Funding for this evaluation is being provided by a National Institute of Justice (NIJ) grant. For more information contact: Susan J. Popkin Associate Abt Associates, Inc. Bethesda, Maryland (301) 913-0653 The Huntsville Organizing Project Huntsville, Alabama Institute for Social Science Research University of Alabama, Tuscaloosa, Alabama The Huntsville Organizing Project is a 5-year intervention funded by the National Institute of Child Health and Human Development (National Institutes of Health). This community-empowerment project is currently in the third year of its 5-year funding period. The project is being conducted in three public housing developments in Huntsville, Alabama, and addresses the issues of youth violence and risky sexual behavior by involving communities in self-help activities. The goal of the project is to speed up the diffusion process for healthy behaviors in six public housing neighborhoods (healthy behaviors are operationalized as decreases in violence, sexually transmitted diseases, and unintended teenage pregnancies). Additional information can be obtained by contacting: Susan Newcomer, Ph.D. National Institute of Child Health and Human Development (301) 496-1174 Mentoring and Rites of Passage: Chicago, Illinois Centers for Disease Control and Prevention (CDC) National Center for Injury Prevention and Control Violence Prevention Division (404) 639-3311 The Centers for Disease Control and Prevention (CDC) have funded 13 projects in the area of youth interpersonal violence. Each of these ongoing projects uses a different strategy. These projects were funded for 1- to 3-year periods. The setting for one of the projects, "Mentoring and Rites of Passage," is a large urban housing development. The partners for this project include the City of Chicago, Department of Health, in collaboration with the Robert Taylor Homes Network, Cook County Bureau of Health Services, and Northeastern Illinois University. The project is designed to reduce violent behaviors and injuries among residents 8 to 18 years of age in Robert Taylor Homes, the largest public housing development in the country. Youths will be provided with adult mentors and a Rites of Passage program, which is a series of activities designed to assist adolescents in their transition into adulthood. Mentors have received about 100 hours of training and counseling to aid in the implementation of a curriculum that focuses on self-concept, sexual identity and awareness, improved skills in communication and decisionmaking, and an appreciation of cultural heritage. The mentors meet with groups of 10 to 15 youths of similar ages at least twice a week. While the mentoring and Rites of Passage programs occur, antiviolence messages and improvements in services are provided to the entire housing development.
The mentoring and Rites of Passage programs will be provided to approximately 360 youths selected from four housing units. The same number of young people from another four housing units will serve as the comparison group. Every 6 months the groups will be compared in terms of their interpretation of standard social interactions and situations, self-reported violent behavior and self-concept, hospital visits related to violence, and calls to the police about violent events in the housing development. For more information contact: Breckinridge Church, Ph.D. (312) 794-2568 Problem-Oriented Policing (POP) in Public Housing New Jersey POP Project Jersey City, New Jersey Evaluator: University of Cincinnati Cincinnati, Ohio This study, sponsored by the National Institute of Justice, is being conducted in six public housing sites that are located in the top 15 crime spots in the city. Site teams are conducting a process analysis, which includes monthly site team meetings for documentation of problems, analyses, responses, and assessments. Some of the outcome measures will include a survey of a random sample of 300 residents before and after the problem-solving efforts, as well as lease records for the sample and social observations of common areas every 3 months. For more information contact: Lorraine Green Mazerolle, Ph.D. Division of Criminal Justice University of Cincinnati 600 Dyer Hall Cincinnati, Ohio 45221 (513) 556-5830 ONLINE RESOURCES There are many useful resources pertaining to violence and crime prevention available on the Internet. These resources include crime/violence statistics and databases, articles about crime and violence, descriptions of crime/violence prevention programs, and evaluations of crime/violence prevention programs. Due to the vastness of the Internet, it is impossible to identify all possible web, gopher, telnet, and ftp sites and bulletin board systems that might provide useful information concerning crime and violence prevention, but the following is a brief listing of those resources most frequently used in the research conducted for KRA's study. Bureau of Justice Assistance (BJA) Web Site http://www.ojp.usdoj.gov/BJA/ The Bureau of Justice Assistance (BJA) supports crime prevention programs at the State and local government levels. It serves as a source of information for technical assistance and evaluation of such programs, and many of its publications concerning technical assistance are available at its web site. The web site also provides links to other Internet sites that contain information relevant to criminal justice and justice programs. Bureau of Justice Statistics (BJS) Web Site http://www.ojp.usdoj.gov/bjs/ The Bureau of Justice Statistics (BJS) is a source of statistical information about the justice system, crimes, and criminals. Its web site provides access to some of BJS's statistical sources. The web site does not provide information regarding crime prevention programs or evaluation of such programs, but it does list links to other sources of criminal justice statistics. National Criminal Justice Reference Service (NCJRS) Web Site http://www.ncjrs.org/ The National Criminal Justice Reference Service (NCJRS) provides information on criminal and juvenile justice systems and supports all bureaus of the U.S.
Department of Justice, Office of Justice Programs, including the National Institute of Justice (NIJ), the Office of Juvenile Justice and Delinquency Prevention (OJJDP), the Bureau of Justice Statistics (BJS), and the Bureau of Justice Assistance (BJA). The web site provides access to criminal justice statistics; articles about crime and crime prevention, including programs designed to prevent crime; and links to other web, gopher, telnet, and ftp sites and bulletin boards containing similar information, including links to NIJ and OJJDP web sites. PAVNet Online Web Site http://www.pavnet.org/ Gopher Site gopher://cyfer.esusda.gov:70/11/violence PAVNet Online was established by Partnerships Against Violence (PAV) as an Internet resource to provide information about effective violence prevention initiatives. PAVNet partners include the Department of Labor, the Department of Agriculture, the Department of Education, HUD, the Department of Justice, the Department of Health and Human Services, and the Department of Defense. PAVNet Online provides online access to documents describing programs, sources of funding, and technical assistance; these documents can be searched using an online search engine. In addition, PAVNet provides links to other Internet resources pertaining to violence and criminal justice. PREVLine Telnet Site telnet://ncadi.health.org Bulletin Board Phone Number (301) 770-0850 PREVLine is a service of the National Clearinghouse for Alcohol and Drug Information (NCADI) and is primarily a resource for substance abuse prevention information. However, it also provides access to its violence prevention resource collection (VPRC). The VPRC can be searched online for specific terms and provides program examples, research findings, and contact information for networking. U.S. Department of Housing and Urban Development (HUD) Web Site http://www.hud.gov/ The U.S. Department of Housing and Urban Development's (HUD's) web site provides access to many different resources that are useful for planning, implementing, and maintaining a community. Included at the site are documents about successful communities, sources of funding, technical assistance, and links to other web sites that provide housing and community-related information. One of the links provided at HUD's web site is a link to HUD USER (described below). HUD USER Web Site http://www.huduser.gov/ HUD USER is the research site of HUD. Its web site provides access to the HUD USER database, which contains over 6,000 reports, articles, case studies, and other research literature on topics related to housing and community development. The HUD USER web site also contains links to other housing and community development resources available on the Internet, lists of HUD publications, and access to two other HUD Office of Policy Development and Research clearinghouses. U.S. Department of Justice (DOJ) Web Site http://justice2.usdoj.gov/ The U.S. Department of Justice (DOJ) is the agency of the executive branch of the U.S. Government that oversees the national criminal justice system. Its web site serves mainly as a starting point for exploring the Internet for crime and criminal justice information and, in this capacity, provides links to other web sites that contain crime and criminal justice information. CLEARINGHOUSES Center for Substance Abuse Prevention (CSAP) National Clearinghouse for Alcohol and Drug Information (NCADI) P.O.
Box 2345 Rockville, MD 20847-2345 (301) 468-2600 TDD: (301) 230-2687 Fax: (301) 468-6433 1-800-729-6686 Center for Substance Abuse Prevention (CSAP) National Resource Center for Prevention of Perinatal Abuse of Alcohol and Other Drugs 9300 Lee Highway Fairfax, VA 22031 (703) 218-5600 Fax: (703) 218-5701 1-800-218-5701 Juvenile Justice Clearinghouse Box 6000 Rockville, MD 20850 Fax: (301) 251-5212 1-800-638-8736 National Clearinghouse on Families and Youth (NCFY) P.O. Box 13505 Silver Spring, MD 20911-3505 (301) 608-8098 Fax: (301) 587-4352 Work and Family Clearinghouse U.S. Department of Labor Women's Bureau 200 Constitution Avenue NW. Washington, DC 20210-0002 (202) 523-4486 Fax: (202) 523-1529 1-800-827-5335 ORGANIZATIONS The American Criminal Justice Association P.O. Box 61047 Sacramento, CA 95860 (916) 484-6553 The American Society of Criminology 1314 Kinnear Rd., Suite 212 Columbus, OH 43212 (614) 292-9207 National Center for Juvenile Justice 701 Forbes Avenue Pittsburgh, PA 15219 (412) 227-6950 The National Center for Juvenile Justice is responsible for collecting juvenile court statistics and maintaining a comprehensive database on the juvenile justice system. The center conducts social competence research, comparative analyses of juvenile and family codes, and program evaluations. The center also assesses juvenile justice services, designs programs and facilities, and provides consultations on automated information and reporting systems. SOURCES OF INFORMATION COLLECTION INSTRUMENTS AND MEASURES Center for the Study and Prevention of Violence (CSPV) University of Colorado at Boulder Institute of Behavioral Science Campus Box 442 Boulder, CO 80309-0442 Clinical Psychometric Research, Inc. P.O. Box 619 Riderwood, MD 21139 1-800-245-0277 Consulting Psychologists Press, Inc. P.O. Box 10096 Palo Alto, CA 94303-0979 1-800-624-1765, ext. 300 Fax: (415) 969-8608 Educational and Industrial Testing Service P.O. Box 7234 San Diego, CA 92107 (619) 222-1666 Psychological Assessment Resources, Inc. P.O. Box 998 Odessa, FL 33556-9901 1-800-331-TEST The Psychological Corporation Order Services Center P.O. Box 839954 San Antonio, TX 78283-3954 Fax: (512) 299-2722 1-800-228-0752 Publishers Test Service P.O. Box 150 Monterey, CA 93942-0150 (408) 649-8400 Fax: (408) 649-7644 SAGE Publications, Inc. P.O. Box 5084 Thousand Oaks, CA 91359-9924 (805) 499-9774 Fax: (805) 499-0871 Western Psychological Services 12301 Wilshire Boulevard Los Angeles, CA 90025 (213) 478-2061 Fax: (213) 478-7838 1-800-648-8857 EVALUATION CONSULTANTS American Evaluation Association University of Virginia Ruffner Hall 405 Emmet Street Charlottesville, VA 22903 (804) 924-0511 Contact: Sandy Sherman American Sociological Association 1722 N Street NW. Washington, DC 20036 (202) 833-3410 Contact: APAC Director Child Welfare League of America 440 First Street NW., Suite 310 Washington, DC 20001-2085 (202) 638-2952 Contact: Patrick Curtis, Ph.D. National Executive Service Corps 257 Park Avenue South New York, NY 10010 (212) 529-6660 MANUALS Hatry, Harry P., Richard E. Winnie, and Donald M. Fisk. 1981. Practical Program Evaluation for State and Local Governments. Second edition. The Urban Institute Press: Washington, DC. This manual is aimed at the government analyst, at both the central staff and operating department levels, as well as at the overall manager or administrator.
Provides specific steps for conducting an evaluation, including the need to identify specific program objectives, to specify criteria for measuring progress toward these objectives and the magnitude of program effects, to identify the population segments that are likely to be affected by the program, and to determine which program impact data should be assembled. Hawkins, J. David, and Britt Nederhood. 1987. Handbook for Evaluating Drug and Alcohol Prevention Programs: Staff/Team Evaluation of Prevention Programs (STEPP). U.S. Department of Health and Human Services, Center for Substance Abuse Prevention. This manual presents an overview of conducting program evaluations through a team approach. The manual is directed toward program administrators and delineates six steps that they may undertake to evaluate their own programs. Includes discussions on why evaluations are needed, program staff's conceptions of evaluations, and the connection between evaluations and accountability. Using the team approach, the manual provides 10 meeting agendas designed to maximize the staff's or team's involvement in the evaluation process. Herman, Joan L., Lynn Lyons Morris, and Carol Taylor Fitz-Gibbon. 1987. Evaluator's Handbook. Center for the Study of Evaluation, University of California, Los Angeles. This handbook is part of a nine-volume program evaluation kit. Includes identification of specific evaluation models and their emphasis and clarifies the issue of why evaluations are important. Uses step-by-step guides for conducting formative and summative evaluations with worksheets for the user to complete. Kumpfer, Karol L., and Gail H. Shur et al. 1993. Measurements in Prevention: A Manual on Selecting and Using Instruments To Evaluate Prevention Programs. U.S. Department of Health and Human Services, Center for Substance Abuse Prevention: Rockville, MD. The focus of this manual is to assist program evaluators and administrators of alcohol and drug abuse prevention programs and other human-service-related fields in identifying measures, issues, and instruments relevant to individual programs and populations. The manual includes an overview of the components of evaluations, discussions on what and how to measure, a compendium of standardized outcome instruments, and suggestions for rating and selecting appropriate instruments. Littell, Julia H. 1986. Building Strong Foundations. Family Resource Coalition: Chicago, IL. This document uses the graduated or tiered approach to evaluations developed by Francis Jacobs and used by the Harvard Family Research Project. The manual includes a well-worked-out, step-by-step approach. Minnesota Prevention Resource Center. Prevention Program Self-Evaluation Handbook. Chemical Dependency Program Division, Minnesota Department of Human Services. This manual provides a step-by-step approach for planning and conducting evaluations of prevention-related programs. Based on the Office of Substance Abuse Prevention's STEPP handbook, the manual includes discussions on the types of evaluations, evaluation designs, development of instruments, preparation of evaluation reports, and evaluation costs. Directed to managers interested in self-evaluation, the manual also includes a series of checklists to address feasibility issues. Muraskin, Lana D. 1993. Understanding Evaluation: The Way to Better Prevention Programs. Westat: Rockville, MD.
This guidebook was written to assist school and community agency staff in carrying out required evaluations under the Drug-Free Schools and Communities Act. Provides a focus on the importance of evaluations and the different dimensions of evaluations and addresses some of the practical problems involved in conducting evaluations, such as confidentiality and informed consent. Also includes a discussion of how program staff often feel about evaluations (that is, they create anxiety or are seen as risky) and suggests procedures that may offset these concerns. Prevention Plus III. Assessing Alcohol and Other Drug Prevention Programs at the School and Community Level. A Four-Step Guide to Useful Program Assessment. U.S. Department of Health and Human Services, Center for Substance Abuse Prevention: Rockville, MD. This guide is designed to present basic information and provide step-by-step procedures using a workbook format. Focused on alcohol and other drug prevention programs, the guide includes discussions on why a program administrator may want to conduct a program assessment and special considerations relevant to evaluating prevention programs. Office for Substance Abuse Prevention, Alcohol, Drug Abuse, and Mental Health Administration, Department of Health and Human Services. 1991. Chapter 7: Assessing the Impact of Prevention Efforts. In The Future by Design: A Community Framework for Preventing Alcohol and Other Drug Problems Through a Systems Approach. This manual provides a framework for developing and implementing substance abuse prevention systems in communities. Provides a discussion on evaluation, including basic steps such as developing a mission statement and identifying goals and objectives. Emphasizes the relevance of evaluation to programs and shows that evaluation is part of program operations rather than something that takes place outside of program operations. Also points out the specific issues that need to be considered relevant to community prevention efforts. U.S. General Accounting Office (GAO), Program Evaluation and Methodology Division, P.O. Box 6015, Gaithersburg, MD 20884-6015. (202) 512-6000. This office publishes a series of evaluation handbooks that address different topics. Single copies are available free from GAO. Titles in the series include: Designing Evaluations GAO/PEMD-10.1.4 Using Structured Interviewing Techniques GAO/PEMD-10.1.5 Developing and Using Questionnaires GAO/PEMD-10.1.7 Case Study Evaluations GAO/PEMD-10.1.9 Quantitative Data Analysis: An Introduction GAO/PEMD-10.1.11 SELECTED BIBLIOGRAPHY The following books, articles, and manuals can provide further information on planning and conducting program evaluations. Babbie, Earl R. 1973. Survey Research Methods. Belmont, CA: Wadsworth Publishing Company, Inc. Berk, Richard A., and Peter H. Rossi. 1990. Thinking About Program Evaluation. Newbury Park: Sage Publications. Black, Thomas R. 1993. Evaluating Social Science Research: An Introduction. London: Sage Publications. Borus, Michael E. 1979. Measuring the Impact of Employment-Related Social Programs. Kalamazoo, MI: The W.E. Upjohn Institute for Employment Research. Corcoran, K.J. 1994. Measures for Clinical Practice: A Sourcebook. Vol I: Couples, Families, and Children. New York: Free Press. Covert, R.W. 1977. Guidelines and Criteria for Constructing Questionnaires. University of Virginia: Evaluation Research Center. Fink, Arlene, and Jacqueline Kosecoff. 1985. How To Conduct Surveys: A Step-by-Step Guide. Newbury Park: Sage Publications.
Fink, Arlene, and Jacqueline Kosecoff. 1978. An Evaluation Primer. Beverly Hills: Sage Publications.
Fitz-Gibbon, C.T., and L.L. Morris. 1987. How To Design a Program Evaluation. Newbury Park: Sage Publications.
Fowler, F.J. 1993. Survey Research Methods. 2d ed. Newbury Park: Sage Publications.
Fowler, F.J., and T.W. Mangione. 1989. Standard Survey Interviewing: Minimizing Interviewer-Related Error. Newbury Park: Sage Publications.
Freeman, H.E., G.D. Sandefur, and P.H. Rossi. 1989. Workbook for Evaluation: A Systematic Approach. Newbury Park: Sage Publications.
Gomby, Deanna S., and Carol S. Larson. 1992. "Evaluation of School-Linked Services." The Future of Children.
Hawkins, J. David, and Britt Nederhood. 1987. Handbook for Evaluating Drug and Alcohol Prevention Programs. Washington, DC: Government Printing Office.
Hedrick, Terry E., et al. 1993. Applied Research Design: A Practical Guide. Newbury Park: Sage Publications.
Herman, J.L. 1987. Program Evaluation Kit. 2d ed. Newbury Park: Sage Publications.
King, Jean A., L.L. Morris, and C.T. Fitz-Gibbon. 1987. How To Assess Program Implementation. Newbury Park: Sage Publications.
Kostelnik, Marjorie. 1987. "Program Evaluation: How To Ask the Right Questions." Child Care Information Exchange 56.
Krueger, R.A. 1988. Focus Groups: A Practical Guide for Applied Research. Newbury Park: Sage Publications.
Levitan, Sar, and Gregory Wurzburg. 1979. Evaluating Federal Social Programs: An Uncertain Art. Kalamazoo, MI: The W.E. Upjohn Institute for Employment Research.
Morris, Lynn L., C.T. Fitz-Gibbon, and M.E. Freeman. 1987. How To Communicate Evaluation Findings. Newbury Park: Sage Publications.
Moskowitz, Joel M. 1993. "Why Reports of Outcome Evaluations Are Often Biased or Uninterpretable: Examples from Evaluations of Drug Abuse Prevention Programs." Evaluation and Program Planning 16: 1-9.
Orlandi, M.A., R. Weston, and L.G. Epstein. 1992. Cultural Competence for Evaluators. Rockville, MD: Center for Substance Abuse Prevention, U.S. Department of Health and Human Services.
Patton, Michael Q. 1990. Qualitative Evaluation and Research Methods. 2d ed. Newbury Park: Sage Publications.
Patton, Michael Q. 1986. Utilization-Focused Evaluation. 2d ed. Beverly Hills: Sage Publications.
Patton, Michael Q. 1982. Practical Evaluation. Newbury Park: Sage Publications.
Patton, Michael Q. 1981. Creative Evaluation. Beverly Hills: Sage Publications.
Pecora, P.J., M.W. Fraser, K.E. Nelson, J. McCroskey, and W. Meezan. 1995. Evaluating Family-Based Services. New York: Aldine de Gruyter.
The Program Manager's Guide to Evaluation: A Handbook Series from the Administration on Children, Youth, and Families. 1996.
Rossi, Peter H., James D. Wright, and Andy B. Anderson, eds. 1983. Handbook of Survey Research. San Diego: Academic Press, Inc.
Rossi, P.H., and H.E. Freeman. 1989. Evaluation: A Systematic Approach. 4th ed. Newbury Park: Sage Publications.
Stecher, B.M., and W.A. Davis. 1987. How To Focus an Evaluation. Newbury Park: Sage Publications.
Sudman, Seymour, and Norman M. Bradburn. 1986. Asking Questions: A Practical Guide to Questionnaire Design. San Francisco: Jossey-Bass Publishers.
U.S. Agency for International Development. 1986. An Approach To Evaluating the Impact of AID Projects. Washington, DC: AID.
U.S. Agency for International Development. 1985. Selecting Data Collection Methods and Preparing Contractor Scopes of Work. Washington, DC: AID.
U.S. Agency for International Development. 1982. Toward a Health Project Evaluation Framework. Washington, DC: AID.
U.S. Agency for International Development. 1979. Manager's Guide to Data Collection. Washington, DC: AID.
Weiss, H.B., and F.H. Jacobs. 1988. Evaluating Family Programs. New York: Aldine de Gruyter.
Wholey, Joseph S., Harry P. Hatry, and K.E. Newcomer, eds. 1994. Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass Publishers.
Wholey, Joseph, and John Scanlon, et al. 1976. Federal Evaluation Policy. Washington, DC: The Urban Institute.
Wolcott, H.F. 1990. Writing Up Qualitative Research. Newbury Park: Sage Publications.
Yin, R.K. 1989. Case Study Research: Design and Methods. Newbury Park: Sage Publications.