How to answer a performance evaluation

U.S. Department of Housing and Urban Development, Office of Policy Development and Research. A Guide to Evaluating Crime Control Programs in Public Housing. Prepared for the U.S. Department of Housing and Urban Development by KRA Corporation. Washington, DC; 1997. pp. 5.1-5.15.

An evaluation plan is a written document that states the objectives of the evaluation, the questions that will be answered, the information that will be collected to answer these questions, and when collection of information will begin and end. You can think of the evaluation plan as the instructions for the evaluation. This plan can be used to guide you through each step of the evaluation process because it details the practices and procedures for successfully conducting your evaluation.

Once the evaluation plan has been completed, it is a good idea to have it reviewed by selected individuals for their comments and suggestions. Potential reviewers include:
  • Public Housing Agency (PHA) administrators who can determine whether the evaluation plan is consistent with the agency's resources and evaluation objectives.
  • Housing staff who can provide feedback on whether the evaluation will create an excessive burden for them and whether it is appropriate for residents.
  • Professional evaluators.
This chapter describes the components of an evaluation plan and provides an outline for preparing one. Although you may never need to develop a plan without assistance, it is helpful to know what a plan is and how it will be used by the evaluator you select. The information contained in this chapter will help you:
  • Work with an experienced evaluator (either an outside evaluator or someone within your PHA) to develop a plan.
  • Obtain a basic understanding of what should be included in an evaluation plan to assist you as you review one.
A sample evaluation plan outline that may be used as a guide appears on the following pages. The major sections of the outline are:
  • Section I: A description of the evaluation framework which specifies what you want to evaluate, what questions are to be addressed in the evaluation, and the timeframe for conducting the evaluation.
  • Section II: A description of the program implementation objectives.
  • Section III: A description of the program outcome objectives and performance measures.
  • Section IV: Procedures for managing and monitoring the evaluation.
I. The Evaluation Framework

A. What you are going to evaluate.
1. The initial program model (assumptions about target population, interventions, short-term outcomes, intermediate outcomes, and final outcomes).
2. Implementation objectives (stated in general and then measurable terms).
   a. What you plan to do, when, and how.
   b. Who will do it.
   c. Participant population and recruitment strategies.
3. Outcome objectives (stated in general and then measurable terms).
4. Context for the evaluation.

B. Questions to be addressed in the evaluation.
1. Are implementation objectives being attained? If not, why not (that is, what barriers or problems have been encountered)? What kinds of procedures facilitated implementation?
2. Are outcome objectives being attained? If not, why not (that is, what barriers or problems have been encountered)? What kinds of procedures facilitated attainment of outcomes?
   a. Do outcomes vary as a function of program features? Which aspects of the program contributed the most to achieving expected outcomes?
   b. Do outcomes vary as a function of characteristics of the residents or staff?

C. The timeframe for the evaluation.
1. When data collection will begin and end.
2. How and why the timeframe was selected.

II. Evaluating Implementation Objectives: Procedures and Methods

    Question 1: Are Implementation Objectives Being Attained and, If Not, Why Not?

A. Objective 1: [State objective in measurable terms. Example: Local police representative will attend all planning and resident training sessions.]

    What to include:
    1. Type of information needed to determine if objective 1 is being attained and to assess barriers and facilitators (that is, performance indicators). Example: Number of planning meetings attended by local police representative.
    2. Sources of information. Include in your plans procedures for maintaining confidentiality of the information obtained during the evaluation.
    3. How sources of information were selected.
    4. Timeframe for collecting information (dates when the data collection is planned to begin and end).
    5. Methods for collecting the information (that is, records reviews, interviews, paper and pencil questionnaires, and observations).
    6. Methods for analyzing the information to determine whether the objective was attained (that is, tabulation of frequencies and assessment of relationships between or among variables).

    B. Objective 2: [Repeat the same information as in 1-6 of objective 1 above.]

C. Objective 3: [Repeat the same information as in 1-6 of objective 1 above.]

III. Evaluating Outcome Objectives: Procedures and Methods

    Question 2: Are Outcome Objectives Being Attained and, If Not, Why Not?

A. Objective 1: [State outcome objective in measurable terms. Example: Decrease robberies on housing development property by 10 percent after 2 months of active security patrols.]

    What to include:
    1. Types of information needed to determine if objective 1 is being attained (that is, what evidence will you use to demonstrate the change?). Example: Number of robberies committed per month before and after initiation of security patrols.
    2. Sources of information (that is, housing staff, residents, PHA staff, and housing managers) and sampling plan, if relevant.
    3. How sources of information were selected.
    4. Timeframe for collecting information (dates when the data collection is planned to begin and end).
    5. Methods of collecting that information (for example, questionnaires, observations, surveys, and interviews) and plans for pretesting information collection methods.
    6. Methods for analyzing the information to determine whether the objective was attained (that is, tabulation of frequencies and assessment of relationships between or among variables using statistical tests).

    B. Objective 2: [Repeat the same information as in 1-6 of objective 1 above.]

    C. Objective 3: [Repeat the same information as in 1-6 of objective 1 above.]

    IV. Procedures for Managing and Monitoring the Evaluation

    What to include:
    1. Procedures for training staff to collect evaluation-related information.
    2. Procedures for conducting quality-control checks of the information collection process.
3. Timelines for collecting, analyzing, and reporting information, including procedures for providing evaluation-related feedback to housing managers and staff.

Section I: The evaluation framework

    This section of the evaluation plan presents the model for assessing your program activities (see chapter 4), program objectives, evaluation questions, and the timeframe for the evaluation (that is, when you will begin and end collection of evaluation information). Section I should also include a discussion of the context for the evaluation, particularly the aspects of the PHA, program staff, and residents that may affect the evaluation. If an outside evaluator is preparing the plan, the evaluator will need your help to prepare this section to ensure that the evaluation is tailored to the needs of your PHA and the residents.

Section II: Evaluating implementation objectives

This section should provide detailed descriptions of what you plan to do, how you plan to do it, and whom you want to reach. This information will be used to answer evaluation questions pertaining to your implementation objectives, such as: Are implementation objectives being attained? If not, why not? What barriers or challenges have been encountered? What has facilitated attainment of these objectives?

    For each objective, the evaluation plan must describe the following:
    • Types of information needed.
    • Sources of information.
    • Criteria for selecting information sources.
    • Methods for collecting information, such as questionnaires and procedures.
    • Timeframe for collecting information.
    • Methods for analyzing information.

Types of information needed. Any information that is collected about your program activities or residents can be considered evaluation data. The types of information needed will be guided by the program objectives you seek to assess. For example, when your objective concerns what you plan to do, you will need to collect information on the types of services, activities, or initiatives that are developed and implemented; who received services; and the duration and intensity of those services.

    If the objective of your PHA is to provide increased security patrols at two sites, you will need to collect the following information:
    • Number of resident volunteers.
    • Number of hours in which security patrols operate.

When the objective concerns who will participate, you will need to collect information about residents' characteristics, the number of residents, how they were selected/recruited, barriers encountered in the selection/recruitment process, and factors that facilitated selection/recruitment.

If the objective is to involve 50 residents in a 6-week crime and drug reduction program, for example, you will want to collect the following information (a record-keeping sketch follows this list):
    • Age, sex, and race of participants.
    • Number of participants previously involved in criminal or drug activity.
    • Number of residents who are participating.
    • Information on how the participants learned about the program.
    • Amount of time residents participate in the program.
    • Number of residents who successfully complete the program.
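
To make this concrete, here is a minimal sketch of how participant information of this kind might be tracked; the Participant record, its field names, and the roster entries are illustrative assumptions rather than anything specified in the guide:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Participant:
    # Field names are hypothetical; match them to your own registration forms.
    age: int
    sex: str
    race: str
    referral_source: str  # how the resident learned about the program
    weeks_attended: int   # out of the 6-week program
    completed: bool

# Invented roster entries, for illustration only.
roster = [
    Participant(17, "F", "Black", "flyer", 6, True),
    Participant(19, "M", "Hispanic", "resident council", 4, False),
    Participant(16, "M", "Black", "friend", 6, True),
]

print("Participating residents:", len(roster))
print("Completed the program:", sum(p.completed for p in roster))
print("How participants learned of it:", Counter(p.referral_source for p in roster))
print("Average weeks attended:", round(sum(p.weeks_attended for p in roster) / len(roster), 1))
```

Keeping such fields aligned with the bulleted items above means the implementation questions can later be answered directly from ordinary program records.
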
    Sources of information. This refers to where, or from whom, you will obtain evaluation information. Again, the selection of sources will be guided by the objective you are assessing. For example:
    • Information on services can come from program records or from interviews with program staff.
    • Information on residents and recruitment strategies can come from program records and interviews with staff and residents.
    • Information about barriers and facilitators to implementing the program or program activities can be obtained from interviews with relevant staff.

    This section of your plan should also include a discussion of how you will maintain the confidentiality of information you obtain from your sources. In addition, it is wise to develop consent forms for those residents being asked to participate in the evaluation. The consent form should include a description of the evaluation objectives and how the information will be used. More information on maintaining confidentiality and a sample informed consent form appear in chapter 6.

    Criteria for selecting information sources. If your initiative has a large number of staff members and/or residents, you can reduce the time and cost of the evaluation by including only a sample of them as sources for evaluation information. Sampling is a statistically reliable way of identifying a number of persons from the entire group of program participants who are representative of the group. An experienced evaluator will be able to advise you as to whether or not you should select a sample for your evaluation.

    There are a variety of methods for sampling your sources.
    • You can sample by identifying a specific timeframe for collecting evaluation-related information and including only those residents who participate during that timeframe.
    • You can sample by randomly selecting the residents (or staff) to be used in the evaluation. For example, you might assign case numbers to residents and include only the even-numbered cases in your evaluation (see the sketch after this list).
    • You can sample based on specific criteria, such as length of time with the program (for staff) or characteristics of residents, such as age, gender, size of family, and length of time in complex.
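
A minimal sketch of the random-selection method mentioned above, assuming residents have already been assigned case numbers; the population size, sample size, and fixed seed are invented for illustration:

```python
import random

case_numbers = list(range(1, 201))  # hypothetical case numbers for 200 residents

# The even-numbered-cases rule described above (a systematic sample).
even_sample = [n for n in case_numbers if n % 2 == 0]

# A simple random sample of 50 cases; fixing the seed makes the draw reproducible.
random.seed(42)
random_sample = random.sample(case_numbers, k=50)

print("Even-numbered sample size:", len(even_sample))
print("First few randomly sampled cases:", sorted(random_sample)[:10])
```
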
    Methods for collecting information. For each implementation objective you are assessing, your evaluation plan must specify what information will be collected (such as questionnaires and procedures) and who will collect it. To the extent possible, the collection of this information should be integrated into ongoing program operations. For example, in training programs, the registration forms for residents and the initial assessments of participating residents can be used to collect evaluation-related information as well as information relevant to conducting the training. There are a number of methods for collecting information including structured and open-ended interviews, paper and pencil inventories or questionnaires, observations, and systematic reviews of agency records or documents. The methods you select will depend upon the following:
    • The evidence you need to establish that your objectives were attained. Performance measures make up this needed evidence. They are the indicators that the program activities reached their intended goals.
    • The information sources, which can be questionnaires, interviews, case records, or observations.
    • Your available resources. You will need to determine if you have the staff and funds available to collect the needed data.

Chapter 6 provides more information on these sources. The questionnaires or forms that you plan to use to collect evaluation information are usually included as part of your evaluation plan. You will not want to begin an evaluation until you have developed or selected all of the data collection instruments you plan to use. Developing or selecting questionnaires to use for the evaluation may require the assistance of an experienced evaluator.

    Timeframe for collecting information. Although you will have already specified a general timeframe for the evaluation, you will need to specify one for collecting data relevant to each implementation objective. Times for data collection will again be guided by the objective being assessed.

    Methods for analyzing information. This section of your evaluation plan describes the practices and procedures for use in analyzing the evaluation information. For assessing program implementation, the analyses will be primarily descriptive and may involve tabulating frequencies (of services and resident characteristics) and classifying narrative information into meaningful categories, such as types of barriers encountered, strategies for overcoming barriers, and types of facilitating factors. An experienced evaluator can help your evaluation team design an analysis plan. More information on analyzing program implementation information is provided in chapter 7.
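
As a small illustration of this kind of descriptive analysis, the sketch below tabulates the frequencies of barrier categories drawn from invented interview notes:

```python
from collections import Counter

# Hypothetical interview findings, already classified into barrier categories.
barriers = [
    "staff turnover", "resident distrust", "staff turnover",
    "scheduling conflicts", "resident distrust", "staff turnover",
]

# Tabulating frequencies is the core descriptive step named above.
for category, count in Counter(barriers).most_common():
    print(f"{category}: {count}")
```
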

    Section III: Evaluating outcome objectives and performance measures

    The practices and procedures for evaluating whether the outcome objectives of your program have been met are similar to those for evaluating implementation objectives. To evaluate outcome objectives you will probably use both qualitative and quantitative performance measures. The performance measures will enable you to answer the following questions:
    • Did residents and/or the community demonstrate changes in knowledge, attitudes, behaviors, or awareness? Performance indicators could include self-reported increased knowledge, a change in attitude, a reduction in criminal activity or violent behavior, and increased awareness of their own or others' violent behavior.
    • Are the changes the result of the program's activities? Are the reported changes in knowledge, attitudes, behavior, or awareness a direct result of your program? Did the changes occur after involvement in the program? Are there other factors that may have influenced the changes?
    Two commonly used evaluation designs that can help you to answer these questions are:
    • Comparison of conditions before and after a program is established.
    • Comparison of conditions before and after a program is established, using a comparison group.

A comparison of conditions before and after the violence prevention program is implemented requires that you collect information at least twice: once before the program is implemented, and then again either sometime after the program has been in effect (when you could expect the program to have had a measurable impact) or after the program has ended. You can collect outcome information as often as you like after the program has been implemented, but you must collect it on residents and/or the community before implementing the program. This information is called baseline information and is essential for demonstrating that a change occurred.
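
To make the before-and-after arithmetic concrete, the sketch below checks the example outcome objective from Section III of the outline (a 10-percent decrease in robberies) against invented monthly counts:

```python
# Invented monthly robbery counts for the months before and after
# active security patrols began.
before = [12, 10, 14]   # baseline months
after = [9, 8, 10]      # follow-up months

baseline = sum(before) / len(before)    # 12.0 robberies per month
followup = sum(after) / len(after)      # 9.0 robberies per month
percent_change = (followup - baseline) / baseline * 100

print(f"Baseline: {baseline:.1f}/month; follow-up: {followup:.1f}/month")
print(f"Change: {percent_change:+.1f}% (objective: -10% or better)")
print("Objective attained:", percent_change <= -10)
```
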

    If you are implementing an education or training program, this type of design can be effective for evaluating immediate changes in participants' knowledge and attitudes. In these types of programs, you can assess residents' knowledge and attitudes prior to the training and immediately after training with some degree of certainty that any changes observed resulted from your interventions.

    A comparison of conditions before and after the violence prevention program is implemented using a comparison group also requires that you collect information at a minimum of two points in time and that you collect information from individuals (or about a housing development or neighborhood) not affected by your violence prevention program. The purpose of a comparison group is to determine if changes you find in your residents or housing development conditions are attributable to your program and not to some other reason. Comparison data might be obtained from the following:
    • Housing development residents not participating in the violence prevention program but who are similar to program participants in most other ways; for example, male teenagers not participating in a midnight basketball program.
    • Crime statistics from a nearby housing development that has characteristics similar to your housing development's characteristics, such as type of development building (that is, highrise or garden apartments), number of teenagers, and level of criminal activity before your violence prevention program was implemented.

    There are obvious cost considerations when including a comparison group in your evaluation design. You must be able to identify a group of individuals or a housing development or neighborhood that is similar to your residents, development, or neighborhood. You must be able to obtain data from such individuals or about the development or neighborhood. Both of these tasks will require some research and additional data collection activity. Although there are additional costs, the information from a comparison group will provide significantly more evidence concerning the effectiveness of your program if your program participants have more positive scores on performance measures than the comparison group. You can state with more certainty that your program was effective in bringing about the observed change and that this is not due to some other reason.
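
One simple way to combine the two sets of changes, sketched below, is a difference-in-differences calculation; the guide itself does not name this technique, and every figure here is invented:

```python
# Invented average robberies per month, before and after the program period,
# at the program development and at a similar comparison development.
program_before, program_after = 12.0, 9.0
comparison_before, comparison_after = 11.0, 10.5

program_change = program_after - program_before            # -3.0
comparison_change = comparison_after - comparison_before   # -0.5

# If crime fell citywide, the comparison site would show a similar drop;
# the excess improvement at the program site is the evidence that the
# change is attributable to the program rather than to outside factors.
print("Change at program site:", program_change)
print("Change at comparison site:", comparison_change)
print("Improvement attributable to the program:", program_change - comparison_change)
```
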

    Pretesting information collection instruments. Your evaluation plan will need to include a discussion of your plans for testing out your questionnaires before using them for evaluation. This process is commonly referred to as pretesting. Chapter 6 provides information on pretesting instruments.

    Analyzing participant outcome information. Your plan for evaluating outcomes should include a description of how you intend to analyze the data that has been collected. The analyses are intended to answer the questions about whether change occurred and whether changes that occurred can be attributed to your program.
