Thursday, July 24, 2008

Evaluating Anti-Gang Efforts


Evaluations provide invaluable information for decision makers, document the program so it may be replicated elsewhere, and enable public agencies to justify program costs.
Evaluation steps include:
• Specifying goals and objectives related to the reduction of harm associated with gang problems.
• Specifying the target population and the time during which the program will operate.
• Describing the program’s activities in detail, directly linking activities with program objectives.
• Constructing a logic diagram of the program that represents the cause-and-effect relationships between activities and accomplishments.
• Developing comparisons that show whether the program had the intended effects on the target population.
• Specifying other factors that might account for changes in the target population.
• Designing data collection instruments.
• Developing and analyzing comparisons, which is the data analysis portion of the evaluation.
• Drawing conclusions.
A process evaluation addresses the elements that characterize the operations and functions of a program, such as organizational structure, policies and procedures, human and technical resources, goals and objectives, and activities.
Process evaluations enable managers to shape the program and make midcourse corrections if necessary. Process evaluations also link objectives and strategies. A third use is providing helpful information to other communities interested in building on a particular gang initiative in the future.
Finally, process evaluations garner support for the program from participants and others in the community.
A logic diagram is a valuable tool that traces a program’s elements from the goals to the specific activities. By presenting a graphic illustration of a program’s logical structure, this diagram aids the evaluation process, helps managers implement and operate the program, and spells out activities. In addition, this exercise points toward potential measures of effectiveness.
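A logic diagram of this kind can be sketched as a simple nested data structure. The sketch below is only an illustration; the goal, objectives, activities, and measures are hypothetical examples, not taken from any actual program:

```python
# Minimal sketch of a program logic diagram as nested data.
# All program elements below are hypothetical illustrations.
logic_model = {
    "goal": "Reduce harm associated with gang activity",
    "objectives": [
        {
            "objective": "Increase positive activities for at-risk youth",
            "activities": ["After-school mentoring", "Job-readiness workshops"],
            "measures": ["Number of youth enrolled", "Attendance rates"],
        },
        {
            "objective": "Reduce gang-related offenses in the target area",
            "activities": ["Focused street outreach"],
            "measures": ["Number and type of arrests"],
        },
    ],
}

def trace(model):
    """Walk the diagram from goal to measures, mirroring the graphic form."""
    lines = [f"GOAL: {model['goal']}"]
    for obj in model["objectives"]:
        lines.append(f"  OBJECTIVE: {obj['objective']}")
        for act in obj["activities"]:
            lines.append(f"    ACTIVITY: {act}")
        for m in obj["measures"]:
            lines.append(f"    MEASURE: {m}")
    return "\n".join(lines)

print(trace(logic_model))
```

Even this toy version shows why the exercise points toward measures of effectiveness: each activity sits directly under the objective it serves, and each measure is tied to a specific objective rather than floating free.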
Data needs will vary with each program’s focus, activities, and specific strategies. Process evaluations use both quantitative and qualitative data, with emphasis usually on the latter. Questions that focus on characteristics such as program emphasis and possible barriers to implementation often require a detailed knowledge of the program and its target. This is best accomplished with qualitative data that allow the evaluator to glean program insights. Open-ended interviews are useful tools because they provide flexibility for the respondent to elaborate and for the interviewer to explore.
Some types of quantitative data are appropriate for assessing program growth and development. Interim process measures, such as the number of youth reached in programs and the number and type of arrests, tell the evaluator and program staff whether the project’s process objectives are being met and whether the program is moving in the intended direction. Evaluating the effects of a program involves several design issues that determine the scope and focus of the impact evaluation. These issues include causality, the proper unit of analysis, the various levels of effects expected, the selection of appropriate data, the basics of data collection, and quasi-experimental designs.
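One common quasi-experimental design compares the change in the target population with the change in a similar comparison group over the same period (a difference-in-differences comparison). A minimal sketch, using entirely hypothetical incident counts:

```python
# Hedged sketch of a simple quasi-experimental comparison
# (difference-in-differences). All counts are hypothetical.
def diff_in_diff(target_pre, target_post, comparison_pre, comparison_post):
    """Change in the target area, net of the change in the comparison area."""
    target_change = target_post - target_pre
    comparison_change = comparison_post - comparison_pre
    return target_change - comparison_change

# Hypothetical gang-related incident counts before and after the program:
effect = diff_in_diff(target_pre=120, target_post=90,
                      comparison_pre=110, comparison_post=105)
print(effect)  # -25: incidents fell 25 more in the target area
```

The comparison group serves the role described above of accounting for other factors: if incidents were falling citywide anyway, the comparison area's change absorbs that trend, and only the remaining difference is attributed to the program.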
The unit of analysis for an impact evaluation corresponds to the focus of program activities. If the action being taken aims at individual-level change (for example, improving parenting skills of teenage mothers), the impact assessment must focus on individual-level change in parenting behaviors.
On the other hand, if the focus of project activity is to reduce fear of gangs among residents in an area, the neighborhood is the proper unit of analysis.
As with other aspects of evaluation, decisions about appropriate measures depend on project objectives and activities. One of the most basic issues about any measure is its validity. Data needs depend on the type of change sought by the intervention. Appropriate use of data depends on the nature of the program and interventions.
Whatever kind or combination of data is appropriate, it must be collected systematically and uniformly throughout the evaluation. The instruments or protocols needed will vary depending on the design and the kind of data sought.
Obviously, the ideal is a well-implemented program with a high level of success. Whatever the outcome, however, a well-conceived evaluation in which the process and impact portions are linked enables evaluators to make an informed assessment.

