Evaluation of Social Intervention Programs: Types, Planning, and Structure
Types of Assessment
- The type of assessment will depend on the objectives to be achieved in the evaluation and the phase of the intervention process, following the logic of social intervention.
- In the design stage of social intervention, diagnostic and design evaluations are the backbone of any evaluation.
- When a change is underway but has only been implemented for a short time, implementation evaluation is appropriate, although results and economic evaluation can also be discussed.
- When changes are established and have been running for some time, it is logical to focus on outcome evaluation, impact, economic aspects, and monitoring.
- We can distinguish five types of evaluations:
- Needs Assessment: Aims to define the needs of a social group or context in order to address them.
- Design Evaluation: Aims to identify the weaknesses of the program.
- Implementation Evaluation: Allows one to determine if the program is operating as expected in a given time and context.
- Outcome Evaluation: Seeks to determine whether the objectives outlined in the design have been achieved in terms of results and effects.
- Meta-evaluation: Involves assessing the design of the evaluation plan.
Planning the Evaluation: The Evaluation Plan
- We focus on the general structure of the evaluation, the internal structure of the Evaluation Plan, and the main elements. It can be applied to the first four types of evaluation, taking into account the peculiarities of each one.
- Each assessment is composed of four stages or phases (Table 1). The most extensive and specific phase corresponds to the “Evaluation Plan”, whose purpose is to clarify and prioritize the contents to assess and to define the techniques we will use, including the evaluation criteria.
- Since Cronbach’s 1963 article, assessment has taken a turn: it began to set aside, in part, the positivist orientation, moving toward one that encourages reflection and generates new ideas, areas of change, and methods to support them.
- Cronbach affirms that the development of a research plan for evaluation is a complex art that requires an open mind, political awareness, and good communication between the planning and implementation stages. His proposal is shown in Figure 1.
- Planning resolves the problem of how to allocate investigative resources, based on a selection of priorities as well as practical and political considerations. This calls for a proactive and flexible planning approach. Cronbach believes that planning must be “a reflective and evolving process.”
- Planning should be conceived as a joint effort, with shared responsibility among a team, and performed at two levels:
- At the first level, called the divergent phase, priorities and responsibilities are established among the members: the possible issues to consider are listed, with a detailed overview of the origin of those issues or problems, the treatment processes, and the targets set.
- In a second phase, called convergence, detailed planning is carried out within the team. It aims to obtain plans built on the experience and mutual interaction of all members, and a ranking is assigned among the various issues raised. We move from the ideal to the real and possible.
- Thus, planning becomes a thoughtful effort to integrate the processes and avoids the restrictive nature of focusing exclusively on objectives. This is what McDonald and others call democratic evaluation, which involves planning by a committee representative of all involved: professionals, users, and public administration. This achieves a basic principle: developing a sense of belonging by reaching a consensus on the major elements of the operation. In organizing the assessment process, the first step is to identify the groups with an interest in the program (Figure 2; Patton, p. 191).
- Among the aspects to evaluate are all those related to the actions of professionals, such as training, knowledge of the subject, organization of work, motivation, dedication, impact, relationships, and results. Ensuring that instruments, logistics, and the process will be evaluated and revised periodically is another function of the committee in the planning stage.
- Once the planning of the evaluation is complete, we move to the research design, defining the scope of the assessment and observations on the social reality in which the program is being implemented. Cronbach calls this phase “Observe/Ask” and “Implementation of the Evaluation.” This stage involves three fundamental principles, whose initials form the word UTO:
- U: Units. These are the individuals or groups to be evaluated.
- T: Treatment. The method of carrying out the evaluation plan; how to evaluate.
- O: Observation Operations. The method of collecting information: testing, observation, application of the evaluation plan.
- Sometimes, an “S” for “setting” is added, which refers to the action framework of the program to be evaluated (see the sketch after this list). A distinction is also made between:
- “UTO”, which refers to the ideal planning,
- and “Uto”, which refers to the specific example, to the sample rather than the population.
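To make the UTO/Uto distinction concrete, here is a minimal, hypothetical Python sketch. The field values are invented and the structure is only one possible way to record Cronbach’s elements, not a notation from the source:

```python
from dataclasses import dataclass

@dataclass
class EvaluationFrame:
    """Cronbach's UTO(S) elements for an evaluation plan."""
    units: list[str]         # U: individuals or groups to be evaluated
    treatment: str           # T: how the evaluation plan is carried out
    observations: list[str]  # O: information-collection operations
    setting: str = ""        # S: action framework of the program (optional)

# "UTO": the ideal plan, defined over the whole target population.
planned = EvaluationFrame(
    units=["all families served by the program"],
    treatment="full evaluation plan as designed",
    observations=["standardized questionnaire", "case-file review"],
    setting="municipal social services",
)

# "Uto": the concrete instance, a sample rather than the population.
realized = EvaluationFrame(
    units=["120 families sampled from two districts"],
    treatment="evaluation plan as actually applied",
    observations=["questionnaire (shortened)", "semi-structured interviews"],
    setting="municipal social services",
)
```

Comparing `planned` against `realized` makes explicit where the evaluation as executed departs from the evaluation as conceived.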
- Following Cronbach’s model, the process to be followed involves, first, constituting the evaluation team, responsible for planning and implementing the evaluation.
- In the planning stage, the team analyzes the advantages and disadvantages of the assessment, the aspects to evaluate, those involved, timing, techniques, and tools of evaluation. This is the divergent phase. In the design phase, there is convergence among all concerned to negotiate all aspects, including the process and aims.
- In the implementation stage, the evaluation units and the processes that each one develops are involved, using the selected techniques and instruments.
- Analysis is the phase where we study the information collected and consider the results and effects of the intervention program.
- The last stage is the value judgment, making it necessary to set the criteria on which to perform it and who is responsible for it.
- The elaboration of the evaluation plan is highly relevant in the evaluation process: it is where the decisions are made that will shape the type of evaluation, its structure, and the stages of assessment. It also fixes the content, the evaluators, and the parameters by which value judgments will be made. It is important to remember that observation, analysis, and judgment correspond to the implementation of the Evaluation Plan.
The Structure of the Evaluation Plan
- Regarding the description of the program, Weiss made it clear that “as important as conceptualizing the desired results is conceptualizing the nature of the program…” What matters is familiarization with the program: each and every one of the peculiarities of the object of study, what it intends to achieve, how it intends to achieve it, the target audience, and, above all, where it is planned to be applied or is being implemented.
- Each program is conducted in a social framework that has implications for its effectiveness. The immediate context is the organization that funds and carries out the program, but in some cases, it is necessary to consider the broader policy framework. Are we able to ensure that results are due to the effects of the program or other variables beyond our control? To answer this and other questions, a detailed study of the program is necessary.
- It is also necessary to know the literature, as it will give us ideas for designing our evaluation plan, allow us to compare and draw parallels and differences, and help us avoid mistakes that others have already studied and solved. This is known as the state of the art or the history of the question. Then, decisions must be made regarding each of the following evaluation aspects:
- Depending on the position of the assessor:
- Internal or self-assessment is the process by which program makers themselves analyze their performance and raise the following objectives:
- To see if they are doing what was proposed.
- To see if they are getting what was sought or something else.
- External evaluation occurs when external actors analyze the program’s functioning to obtain an overview of the degree of actual program development.
- According to the ultimate purpose of the evaluation of programs and projects:
- Accountability refers to criteria of social efficiency. In social action programs using public funds, consideration should be given to ensuring that work is being carried out seriously and effectively. Thus, “accountability” requires some evaluation of cost-effectiveness.
- Comparing programs (assessment) involves measuring the effects of programs in numerical terms (a worked sketch follows this list), such as:
- The number of beneficiaries of each program.
- Verifying the costs per unit.
- Differentiating between programs by the results obtained.
- Establishing a “ranking” among the various programs at regional or national levels.
- Determining the staff time involved.
- Knowing the level of access of beneficiaries in each service, etc.
- Carrying out a formative evaluation contributes to the improvement of the program. It is performed over the life of the program and verifies the validity of all components in achieving the objectives. It is planned primarily to enable decision-making to restructure the components according to the initial objectives.
- Evaluating the product (summative evaluation) is aimed at the final results of a program and is used to decide whether to adapt, continue, implement, or reject it. It provides information to the sponsors on the level reached in relation to the objectives and serves three major functions:
- To attest to the achievement of the objectives.
- To certify the status and capacity of the program.
- To check its validity.
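As a hedged illustration of the numerical comparisons listed above, the following minimal Python sketch computes cost per beneficiary and ranks programs by results. The program names, costs, and attainment rates are invented for illustration only:

```python
# Hypothetical program figures (invented for illustration only).
programs = {
    "Program A": {"beneficiaries": 340, "cost": 51_000, "goal_attainment": 0.72},
    "Program B": {"beneficiaries": 120, "cost": 30_000, "goal_attainment": 0.81},
    "Program C": {"beneficiaries": 560, "cost": 95_200, "goal_attainment": 0.64},
}

# Verify the cost per unit (per beneficiary) of each program.
for name, p in programs.items():
    p["cost_per_unit"] = p["cost"] / p["beneficiaries"]

# Establish a "ranking" among the programs by the results obtained.
ranking = sorted(programs, key=lambda n: programs[n]["goal_attainment"], reverse=True)

for rank, name in enumerate(ranking, start=1):
    p = programs[name]
    print(f"{rank}. {name}: {p['beneficiaries']} beneficiaries, "
          f"{p['cost_per_unit']:.2f} per unit, "
          f"goal attainment {p['goal_attainment']:.0%}")
```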
- Another decision to make is whether the evaluation will be global, partial, or mixed.
- If global, molar, or holistic, all parties, all those involved, and all stages must be taken into account in a globalizing manner. It must respond to the principle of unity.
- If partial or molecular, the evaluation focuses on one aspect of a whole, such as beneficiaries, expected or unexpected outcomes, or evaluating only some phases or subphases of design, implementation, and results.
- A mixed assessment combines both, starting with a general overview and, through it, detecting (diagnostic function) levels, functions, relationships, or behaviors that seem to show difficulties, conflicts, and irregularities. This allows decisions for improvement and further, more detailed and in-depth evaluation focused on those areas.
- According to the perspective adopted in the evaluation. General trends:
- The positivist/quantitativist perspective, called evaluation research, is characterized by its concern for controlling variables and measuring outcomes expressed numerically. Its base is in psychology and the natural sciences. It emerged after Campbell and Stanley’s 1966 publication on pre-experimental, experimental, and quasi-experimental designs, acquiring the rank of dominant paradigm. In the early 1980s, it entered a crisis because “the realization of genuine social experiments is very difficult or impossible, always requires a great financial effort, and takes a long time, so their results are often not useful or usable.” Following such criticism, a qualitative concept of evaluation appeared.
- The naturalistic/qualitativist perspective, called ethnographic. According to Cook and Reichardt, its interest lies in describing the observed facts in order to interpret them within the global context in which they occur and thus explain the phenomenon. Its basis is in ethnography and sociology. Its aim is “to provide a comprehensive view of the whole program.” Examples of quantitative assessment are more numerous, and evaluation is also considered from the perspective of complementarity between the two.
- Depending on the objectives to be achieved in evaluating a social intervention program. These may vary depending on the origin of the initiative for the assessment and its more immediate claims. Evaluation involves making a value judgment, but conducting an evaluation study means providing enough information for that judgment to be made clearly. Otherwise, one can fall victim to the complexity of interests implicit in most social intervention programs.
- One objective is to improve the quality of social intervention to meet identified social needs.
- Another is to identify the strengths and weaknesses in programs to refine the intervention, with the help of institutional infrastructure.
- Those responsible may need an assessment that enables them to clarify certain issues, while users may expect very different objectives. It is therefore necessary to understand both the implicit and the explicit goals. “The plurality of interests can impact the evaluation process in different ways. First of all, there is the difficulty of defining the best perspective from which to address evaluation” (Subirats). Therefore, evaluation professionals must pinpoint targets, differentiating between:
- General objectives, including knowledge of the implementation processes and results of public intervention.
- Specific objectives. Among them, depending on the purpose of evaluation:
- Providing proposals to inform decision-making.
- Knowing the views of users.
- Knowing the extent to which the program is being applied as planned.
- Knowing the side effects resulting from the implementation of the program, such as the resolution of conflicts in the organization.
- Improving practices and procedures.
- Adopting or rejecting specific technical strategies for the program.
- Knowing the program’s efficiency and effectiveness.
- With regard to approaches and models, House groups them into six types for analyzing reality, organized in two blocks: inductive and deductive models. The overview of the approaches that the evaluation of social action programs can take reflects the diversity and evolution of the discipline through its various models. These range from the positivist perspective, copied from the natural sciences, to the methodological diversity that gives access to the specific dimensions of the object.
- Approach based on behavioral objectives (or targets). This approach analyzes the objectives and determines whether they have been achieved and whether there is a discrepancy between goals and results. It takes into account productivity and accountability, without worrying about efficiency. It is widely used in education and in government. Promoted by Tyler, it is simple and straightforward but has limitations. It can determine whether a program has achieved its objectives but does not explain how they were achieved or why they failed, nor does it measure unexpected outcomes. It requires the specification and definition of measurable objectives, which can be difficult or impossible.
- The approach that ignores the objectives is a direct reaction against the previous one. Scriven argues that the evaluator should not only base their assessment on the objectives but should also avoid information about them to prevent bias. This should be seen in the context of Scriven’s concern for reducing the effects of bias in evaluation. Among its limitations are:
- During the evaluation, evaluators face continual demands, requiring special care to keep the assessment on track.
- Evaluators operate similarly to a detective who must discover and extract key information, or a judge who must determine connections. Regarding the modus operandi, the evaluator, like the researcher, has to establish causal relationships between the program and its effects.
- Decision-making approach. This approach recognizes the connection between evaluation and decision-making, although there may be several managers and different ways in which decisions are made. The evaluation must be structured based on decisions, which often refers to ultimate responsibility. The methodology is based on surveys or interview techniques, and the evaluator focuses more on changes in the environment where the program is developed than on trying to set up experiments.
- The systems analysis approach focuses on measuring results; its product is a measure of the program’s efficiency. Data are quantitative, and measures of the results are related to processes using statistical techniques. It requires a good experimental model and aims to be as objective an assessment as possible. The main problem is that, while providing valid and reliable evidence, it can fall into an excessive reductionism that is inconsistent with human complexity. Economists and managers are the most inclined toward this model.
- Professional review approach (ID). This can be very useful in evaluating the design and implementation of intervention programs, especially when conducting an overall evaluation of the program.
- Case study or negotiation focuses on the review of parts of the program, using the perception of the subjects involved and of the assessor. It aims to improve understanding among recipients by presenting how each sees the others. The main question that arises is how the program is perceived. Limitations include:
- Authenticity: the foundations for trusting the results.
- Setting limits on research.
- Restriction to the categories in which data can be assimilated and understood.
- To address these limitations, the negotiation approach seeks to represent all significant values in the case study, drawing on their criteria and standards, and leaving the reader to weigh and balance these elements.
The Meta-Evaluation of the Evaluation Plan
- “Start at home.” Scriven argues that Meta-Evaluation should become a professional imperative of evaluators: first, evaluate our own work and then that of others. Defined as a “process of analysis to make value judgments on the evaluation itself,” it involves assessing the design of the evaluation plan to implement any program. It is also called “evaluability assessment.” Stufflebeam defines it as “the process that aims to assess and describe an evaluation activity, and judge it under certain criteria that characterize a good evaluation.”
- Fernández Ballesteros considers the intentions of those responsible and defines it as a “process that aims to establish criteria that allow us to determine the quality of developments and choose those that best suit the case and the needs and aspirations of the sponsoring organizations of programs.”
- Following the logic of the intervention, it should take place before deployment, and we approach a design evaluation with the intention of identifying any weaknesses in the evaluation plan. However, the prefix ‘meta’ suggests it should occur after the evaluation. Scriven defends this position and highlights the following contributions of meta-evaluation:
- It is an assessment of merit for the efforts made in the appraisal.
- It is useful for making decisions, assigning responsibility, and providing information for decision-making and for future evaluators in their future assessments.
- It is a formative evaluation that provides feedback with new knowledge.
- According to Stufflebeam, it seeks to determine the usefulness of evaluation reports, a way to enhance the evaluation, to check whether it has been useful for subsequent practices or to those responsible.
- In practical analysis, the objective is to note that the evaluation plan is feasible and to identify any weaknesses prior to implementation. Meta-evaluation is similar to assessment based on professional opinion rather than objective measurement of results.
- To operationalize the entire process of meta-evaluation in a clear manner, Table 3 summarizes the three key issues of any evaluation process:
- What to evaluate? The answer requires specifying the following:
- The evaluation plan provides a profile of the program to be evaluated, including:
- What is to be achieved with the program (objectives).
- The degree or amount of the condition to be achieved (variables, targets, and indicators).
- The sector that will benefit from the program (recipients or users).
- The geographic area covered (spatial coverage).
- How it is done and what procedures are used (methods and activities).
- The components of the program (human and financial resources).
- When the program will be run.
- Defines the organizational level at which the evaluation will be conducted. This should answer questions such as:
- Whether the assessment tools to be used will measure what is truly important for results.
- Whether the chosen variables provide the information needed.
- Whether the chosen sample is the right one.
- Whether the objectives are clearly defined.
- Whether there is a schedule for assessment.
- Whether there are sufficient financial and technical resources.
- Whether the chosen evaluation model is the most appropriate.
- Whether the evaluation design responds to the peculiarities of the program.
- Who will conduct the assessment.
- Specifies what to evaluate. This involves establishing whether indicators have been identified for each and every one of the aspects to be evaluated. The evaluation plan needs to focus on relevant issues and meet the objectives. Some of the issues are:
- How does the program link to the context?
- How are the biases of political interests at stake controlled?
- Who are the beneficiaries of the assessment?
- What information is required to make a quality assessment?
- Contains the analysis plan and the controls on the results of fieldwork. Statistical methods and qualitative analysis are important in this type of evaluation. The evaluation plan must specify what will be used in the analysis, because the results will depend on it, and it will identify what type of analysis can be done.
- Outlines the characteristics of the evaluation report. This includes how the results will be presented: will they be merely descriptive, or will they provide an explanation of each fact? Another key issue is the scales on which value judgments will be made: are they specified? Who makes the judgment?
- Who does the meta-evaluation? Who is responsible for making value judgments about the merits or shortcomings of the evaluation plan? It is advisable for them to be experts, but not necessarily in all cases. Each evaluation plan has its own peculiarities that need to be studied.
- How to assess? Which technique or techniques and procedures can be used in the collection of information? Two possible techniques are identified:
- The Delphi study is based on the analysis of the opinions of experts, from as broad and varied a field as possible, on the same topic, in order to reach a consensus on certain trends (a tabulation sketch follows this list). It has the potential to provide a rich and integrated vision. Two aspects make up its methodology:
- Strict confidentiality, which allows for greater objectivity.
- The use of a very brief second questionnaire that attempts to clarify the areas of apparent disagreement in the preliminary results, allowing a consensus to be reached or the findings unified.
- Rating Questionnaire Evaluation Plan, covering all relevant aspects of the plan, such as objectives, activities, resources, etc.
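As an illustrative sketch of how Delphi rounds might be tabulated (the 1–5 rating scale and the data are invented, not from the source), the following Python snippet compares the spread of expert ratings across two rounds; a narrowing interquartile range is read as emerging consensus:

```python
from statistics import median, quantiles

# Hypothetical 1-5 expert ratings of one plan aspect ("clarity of objectives").
round1 = [2, 4, 5, 3, 4, 2, 5, 3]
round2 = [3, 4, 4, 3, 4, 3, 4, 4]  # after feedback of round-1 results

def summarize(ratings):
    """Return the central tendency and spread of a set of ratings."""
    q1, _, q3 = quantiles(ratings, n=4)
    return median(ratings), q3 - q1  # median and interquartile range

for label, ratings in [("Round 1", round1), ("Round 2", round2)]:
    med, iqr = summarize(ratings)
    print(f"{label}: median={med}, IQR={iqr}")

# A shrinking IQR across rounds suggests the areas of apparent
# disagreement are being resolved and a consensus is forming.
```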
- The report of the meta-evaluation plan is drawn up once the opinions of experts have been gathered and must include the results and recommendations for improvement. Based on this report, the creators of the plan will be in a position to make the most appropriate decision: to implement it as it was designed, to make improvements, or to discard it.
- Meta-evaluation, like the work of evaluators, must respond to criteria of validity. As Stufflebeam said, “the aim of Meta-evaluation is to ensure the quality of evaluative services, avoid or deal with illegal practices or services that are not in the public interest, pointing the way for the improvement of the profession and to promote greater understanding of evaluative enterprise.” Notes on the performance criteria of meta-evaluation:
- Internal validity.
- External validity.
- Reliability.
- Objectivity.
- Relevance.
- Importance.
- Reach.
- Credibility.
- Cronbach introduces an important criterion in meta-evaluation: the reduction of uncertainties related to the selection of priority questions. It should be considered whether the questions are clear, relevant, and accurate. He introduces concepts such as the gain in effectiveness and the prior uncertainty of a question, noting that sometimes questions are generated about the underlying elements of the program. Cronbach’s questions may be helpful:
- Are the formulated assessment questions the right ones? Do they derive from the identified problem and its causal relationships?
- Do they identify key elements of this problem?
- Do they help reduce the uncertainty that needs to be clarified with the evaluation?
- Are the voices of all users included in the formulation of evaluation questions?
- Is the chosen methodology optimal for responding to the selected questions?
- Do the questions truly guide the chosen method, or is the method preferred over the questions?
- Have the assessment questions been reviewed, and if necessary, have they been restated in accordance with changes generated in the context of program evaluation?
- Do the evaluation findings identify where change is possible?
- What information must be generated, and what information is not necessary?
Implementation and Evaluation Plan
- Once the design of the evaluation plan has been evaluated, the team is in a position to carry out the evaluation of the social intervention program. Before starting the plan, a pilot study should be conducted to test the selected techniques for obtaining information. If a questionnaire is used, problems with the meaning of questions, listening, language, and timing need to be addressed.
- After developing the instruments, the next step is the collection of information, one of the key moments, as it will be the basis on which value judgments are made. The responsibility that comes from the impact on people must not be forgotten.
- At the end of the evaluation, the publication of the evaluation report is considered, especially when what is being assessed is the work of professionals involved in social intervention. They have the right and obligation to receive feedback from users through a well-managed program evaluation.
- Since professionals are the ones who must decide to whom to send the results, this raises issues of confidentiality and security. In the process of evaluating professionals’ work, the following precautions are advised with regard to users:
- Avoid anxiety-provoking assessments, conflict, and confusion. If the assessment work destroys the environment and affects the organization of social action, it is worthless.
- Avoid establishing an evaluation program in times of crisis and conflict.
Chapter 8 DIAGNOSTIC EVALUATION
Definition of Diagnostic Evaluation
It is intended to define the needs of a group or social context in order to address them. It reflects the process of defining the problem, the group, and the need for social intervention, and answers the following evaluative questions:
- What is/are the problem(s)?
- What are the needs?
- What are the causes of the problem?
- Who is affected and to what extent?
The Structure of the Diagnostic Evaluation Plan
The evaluation plan is a task of utmost importance in the process; it is where all the decisions of the diagnostic evaluation are made (Table 1).
- The description of the social context to be evaluated is the first approach and familiarization with reality, making a physical description and responding to:
- Where is this context located?
- What is its history of formation?
- How has it been formed?
- What events have shaped its reality?
- The contextualization of the entity being evaluated in relation to other entities, knowing how it relates to other nearby contexts. A neighborhood is not an independent unit; it shares common elements with others. This makes it possible, once the evaluation is complete, to determine whether the results are due to the reality itself or to other variables beyond our control.
- The state of affairs or history. It is also necessary to know the literature, especially in reference to that context, which will give us ideas for designing the evaluation plan, comparing and drawing parallels and differences, to avoid mistakes that others have identified, studied, and resolved.
- Evaluative options. It is advisable to conduct an external evaluation, although this does not rule out an internal one: if, for instance, one wants to know the reality of a therapeutic community, those involved in the scheme could conduct it.
- The purpose of the diagnostic evaluation can be quite varied:
- Raising awareness of various social problems and explaining their origin and causes. Others include:
- Accountability for expenditures.
- Comparing two different societies.
- Seeing how reality evolves through various social interventions.
- Evaluating a process of social action in that context.
- Another decision to make is whether the evaluation will be global, partial, or mixed.
- If we opt for a global, molar, or holistic evaluation, all areas and subjects involved will be included in the analysis. It must meet the principle of unity, which explains the development, implementation, and results, and should cover all the contents of Table 1.
- A partial or molecular evaluation focuses on an aspect or part of a whole. Each element is studied separately.
- The combination of a global and a partial assessment is mixed, characterized by having a general knowledge of the context and, through it, identifying problems. Once diagnosed, it focuses on the in-depth study of the most relevant issues.
- The perspective adopted in the evaluation. Both the positivist/quantitativist outlook and the naturalist/qualitativist perspective fit its characteristics. The choice of one or the other depends on the object of study and on preferences.
- The objectives may vary according to the group or person from whom the initiative originates and their immediate claims. Evaluation entails making value judgments, but conducting an evaluation study involves providing information to enable judging this work without being a victim of political, social, or economic interests.
- Knowing the main social problems of that context.
- Identifying the overall needs of people in different areas and marking the different specific objectives, including:
- Meeting the needs of families.
- Studying the needs of the elderly.
- Knowing the problems of the disabled.
- Assessing the needs of addicts.
- Meeting the needs of immigrants.
Those responsible need a diagnostic assessment that enables them to clarify:
- What is the most disadvantaged population?
- Should more funds be allocated?
- Which population should be a priority?
- Which groups are in a situation of exclusion?
- Which families are at the threshold of poverty?
- The most appropriate approaches for this type of evaluation are:
- Systems.
- Decision-making.
- Goal-free (the approach that ignores objectives).
- Case study.
Example Diagnostic Evaluation: Evaluation of Autonomic Drug Plans
The Government Delegation for the National Drug Plan commissioned the Research Group “Analysis of Social Problems in Andalusia” at the University of Granada to conduct a diagnostic evaluation aimed at determining the extent of the problem and its characteristics. The specific objectives are:
- Describe the programs for prevention, intervention, care, and social integration of drug addicts.
- Understand their development and operation and analyze some of the results.
- Review the studies and research related to the field of drug addiction.
- Consider the coordination between different administrations.
Diagnostic Evaluation Model
Figure 1. Diagnostic evaluation model (own elaboration). Phase I, description (intentions and observations), follows this procedure: (1) those responsible in the Autonomous Regions are surveyed for information collection; (2) analysis of the information from the questionnaire; (3) fieldwork with participant observation and semi-structured interviews; (4) analysis and triangulation of the results; (5) preliminary report; (6) review by the Inter-Autonomous Commission; (7) final report. Phase II: value judgment by those responsible.
It is difficult to combine and compare different situations, in areas very different in density and population size, and with particular situations regarding drug use. The study starts with an overview of the status of consumption, the consequences, the structure and operation of the Plan, the resources, the main areas of intervention (Prevention, Intervention Assistance, Social Integration, Studies and Research), and the relationship established with NGOs.
This evaluation has shown the particular situation of each Autonomous Community, overcoming the “fear” of being assessed and demonstrating the benefits of this practice. It has proved useful for decision-making and for professionals.
3.1. The evaluation process: description of the phases
The first step was to become familiar with the concepts used in the field of drug dependence, which differ across the Autonomous Communities (CCAA), in order to extract common indicators that allow comparisons. From these, a questionnaire was developed that included definitions of concepts in some items and was structured in the following blocks:
BLOCK I. Organization and operation of the regional plan:
- Objectives and needs
- Programming
- Evaluation
- Relationship between the autonomous community and the local level
- Structure
- Coordination
- National Strategy 2000-2008
BLOCK II. Human Resources.
BLOCK III. Prevention.
BLOCK IV. Care intervention:
- Programming and evaluation.
- Monitoring of programs
- Functional and organizational structure
- Material resources
- Formative assessment
BLOCK V. Social Inclusion:
- Programming
- Development and implementation of assessment programs
- Resources
BLOCK VI. Studies and Research.
The questionnaire was amended several times: it had to combine the methodology provided by sociology with a field, drug addiction, whose concepts and particularities were unfamiliar. After the pilot study, the questionnaire was sent to the various authorities together with an explanation of the purpose of the research.
It was a very extensive questionnaire, covering the planning of actions on drugs, the functional and organizational structure, budgeting, and evaluation, and, for each area of intervention, the programs, coordination, activities, resources, problems, etc.
All the CCAA returned the questionnaire, with the exception of two that chose not to participate. The area that presented the most difficulty was Human Resources. Following this stage, and despite some problems, the result was rated as positive.
Work began with the annual reports and plans of each Autonomous Community to gain knowledge of the epidemiological situation of consumption, using as secondary sources the Spanish Observatory on Drugs and the studies of each community. Important differences were found: in some regions studies were very hard to find, while in others the volume was much higher.
The consequences of consumption were then analyzed, both those affecting health and other social aspects. Finally, two new sections were added to the progress report: one referring to the budget and another to the relationship with the NGOs involved in the field of drug dependence.
The results of the questionnaires were then added; since the responses were open-ended, they were analyzed using qualitative methods. Quantitative analysis was not appropriate for them, but a database was developed in SPSS with the epidemiological data in order to make use of that information.
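The team’s database was built in SPSS; as an analogous, hypothetical illustration in Python with pandas (the community names are real, but the indicator columns and figures are invented), a rate per 100,000 inhabitants makes regions of very different population sizes comparable:

```python
import pandas as pd

# Hypothetical epidemiological indicators per Autonomous Community
# (figures invented; the study's real database was built in SPSS).
data = pd.DataFrame({
    "community": ["Andalusia", "Aragon", "Asturias", "Galicia"],
    "treatment_admissions": [5200, 610, 480, 1900],
    "population_15_64": [5_100_000, 820_000, 700_000, 1_850_000],
})

# Admissions per 100,000 inhabitants aged 15-64, so that regions of
# very different density and population size can be compared.
data["admissions_per_100k"] = (
    data["treatment_admissions"] / data["population_15_64"] * 100_000
)
print(data.sort_values("admissions_per_100k", ascending=False))
```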
After an initial descriptive analysis of the information, two problems were encountered:
- The questionnaires were returned over the course of a year, so the analysis was delayed and the reference dates differed; in a year many aspects could have changed, so the information had to be continually updated.
- Explanations for some answers were missing, and there were matters that could not be captured with this instrument.
Thus began the second phase: personal interviews with those responsible for the Community plans, covering the new aspects of the past year, the evolution of the drug addiction situation, the main coordination mechanisms, the situation of human and financial resources, the latest programs, the overall assessment of each area of intervention (prevention, care intervention, and social inclusion), the relationship with NGOs and, finally, the strengths and weaknesses at that time.
The interviews were scheduled along three routes, linking nearby communities so that they could be conducted on consecutive days.
After analyzing the interviews and including updated information from new plans, laws, studies, etc., the final report for each CCAA was prepared, along with a comparative state-level report. In the last phase, the reports were made available to those responsible for the PNSD and for each community so that they could assess them as experts.
3.2. The evaluation report: the general situation of drug abuse in Spain
The assessment is intended to provide an overview of the drug situation in Spain; only the report’s final contributions are collected here.
4. Final contributions: strengths and weaknesses
The most significant aspects that can be highlighted are:
- A change in consumption and in the problems arising from it: alongside heroin, cocaine has begun to gain increasing importance. Given these changes, the National Drug Plan has adopted a new document, the National Drug Strategy 2000-2008, agreed by all authorities and to be adapted in the Community plans. The priority area is prevention, with health education as the most appropriate strategy.
- In the area of care intervention, the commitment to harm reduction programs will continue, reaching out to all groups. The aim is to consolidate the Therapeutic Care and Social Integration Circuit in Drug Addiction, which seeks to integrate drug addicts into society, with Primary Care, Specialized Mental Health, Social Services, etc., and all institutions with competence working together.
- It is necessary to promote the specialized training of professionals through postgraduate and continuing education, achieve better indicators and evaluations of programs and services, and promote research. Some of the initiatives in this area involve the creation of new documentation centers and information networks among professionals.
- It is also important to develop lines of work in the international arena, participating in European and international organizations and improving bilateral relations.
All this is meant to fulfill the objectives of the strategy, which is participatory and inclusive:
- addressing drugs from a global perspective,
- seeking the generalization of school-based prevention,
- carrying out prevention in the workplace,
- normalizing assistance to drug addicts,
- ensuring full coverage and the integration of care work,
- enhancing research and training,
- developing the Spanish Observatory on Drugs,
- acting on supply control.
The work of the NDP, the Community plans, and the various institutions involved in the fight against drug addiction has been growing, and much has been achieved, including:
- diversity and consolidation of the healthcare network,
- policy development,
- closeness to society,
- good coordination and consensus,
- the quality of the services provided and their adaptation to changing realities,
- the willingness and interest of professionals,
- the role of NGOs, etc.
Further work is needed, focusing on:
- prevention, which involves rigorous programs, the promotion of evidence-based models, and working with adolescents on their perception of risk and changing their stereotypes.
- social inclusion: we must raise awareness in society and involve the whole population, especially families. Assistance should be normalized, and we need to develop a network of social and labor integration and reach the prison population (Table 2).
- research.
An adequate budget is needed, along with trained, motivated, and well-paid professionals, a joint and consensual approach within the public network, and solutions to geographic dispersion. Assessment is the tool that must always be present, showing the weaknesses and helping to improve action.
Table 2: Strengths and weaknesses of Autonomic Drug Plans
| Autonomous Community | Strengths | Weaknesses |
| --- | --- | --- |
| Andalusia | Diversity of the network, especially in care | Much remains to be done in prevention and social inclusion; extension, disparity, and geographic dispersion; a bigger budget and more research are needed |
| Aragon | The law on drugs; the network structure; good coordination; functioning social reintegration (there are job opportunities and a low unemployment rate) | Achieving normalization; starting a line of work with adolescents (alcohol and tobacco); more tools are needed to work on the issue of cannabis; making society aware that prevention is everybody's job |
| Asturias | The 2001-2003 Drug Plan reflects what society says and is supported by the Government and agreed upon | Addressing drug abuse among adolescents (changing their stereotypes); combating insecurity and social alarm; facing new phenomena (synthetic drugs, etc.); more consensus and joint work between administrations is needed |
| Cantabria | All plans have been agreed upon; quality services tailored to demand, certified by a quality accreditation system (AENOR) in all departments and areas since 1999 | Lack of time to carry out all the projects ("many ideas and little time"); better involving the families of drug addicts and of those at risk of addiction; ultimately, reaching more people |
| Castilla-La Mancha | Willingness and interest of professionals in the field of prisons | In prisons, professionals are fairly saturated, with few resources, etc.; geographic dispersion, with smaller towns having great difficulty reaching the centers |
| Castilla y León | Long track record on the issue of drugs, with four drug plans and a pioneering law; considerable policy development; government and NGOs working in line with planning policies; high stability over time; little social alarm on this issue; and, finally, the health care network | Tackling the issues of alcohol and tobacco; promoting prevention programs based on rigorous and systematic evidence, especially in the family field (crucial and much neglected); losing the fear of evaluation and of poor results; developing social and labor integration by creating a cooperative network; increased funding; consolidating and standardizing the remuneration of professionals working in the network (especially in care) |
| Galicia | High level of institutionalization, legislative production, and budgetary stability; an institutionalized Office | Training of professionals, which requires commitment to it from those involved; ensuring the Plan is seen as a plan of the whole government, not of a single department; addressing the risk perception of society in general, not only of young people |
| Navarre | The coordination structure and way of working; the relationship with Justice; many working groups have emerged, it being a very small region where what happens is well known | The way of working is very varied and sometimes needs to be made known; political demand is weaker than what professionals would wish and needs strengthening |
| Rioja | It seeks to make the changes in administration most appropriate to the current situation | |
| Valencia | Drug policy is highly valued; there is recognition of the work, which generates strong support from the power structures; the people in the service and prevention network "believe in what they do"; major policies or broad lines are not imposed but born of the people, so people support the system | The nature of the administration means the salaries of the management apparatus are lower, which makes it difficult to retain knowledgeable people |
Chapter 9 THE EVALUATION OF DESIGN
1. Definition
It aims to identify and explain the possible weaknesses of a program before its implementation.
It is in the logic of social intervention programs to improve the planning and quality control processes.
Social intervention is costly, not only financially but also in the effort, energy, and time invested by the team and the implementers. Design evaluation can help ensure these efforts are not wasted; even when it is expensive, the resources spent on reviewing the design are worth it. It is a means of ensuring a more prudent and efficient use of scarce human and material resources, and of helping interventions become more likely to achieve their goals.
It is supposed to answer the following evaluative questions:
- Is this the best possible alternative to meet the needs the social intervention addresses?
- Is the implementation of this alternative feasible at that time?
2. The structure of the design evaluation plan
The development of the design evaluation plan is where the evaluation model is configured and where the big questions of any assessment are answered: what to evaluate? Who? How? When? Table 1 shows the aspects contained in the design evaluation plan.
- Becoming familiar with the program is the first step, which requires carrying out a description of:
- The problem being addressed.
- The source of the problem.
- The incidence of the problem.
- The demographic characteristics of those affected.
- The places where the problem occurs.
- The characteristics of the professionals involved.
- The places where the problem does not occur or is less intense.
- The course of the problem.
- Sources of information used for intervention.
- The rationale of the program helps to know what it wants to achieve, how it intends to achieve it, to whom it is directed, and where it is planned to be implemented. Knowing the immediate social context is of utmost importance, since the success or failure of the intervention may depend on it. This leads us to question the suitability of the project: whether it is the best program for this social context.
- A very important issue in design evaluation is the political backing of the intervention. No matter how efficient and effective a design may be, without the support of those responsible it can hardly be implemented and succeed.
- It is necessary to know the literature, above all evaluations of the same program or of programs similar to ours, which will give us ideas for designing the evaluation plan, allow us to compare common features to draw parallels and differences, and help us avoid mistakes that others have identified, discussed, and solved. This is the state of the art or history of the question, and it will help us to:
- Know the source of the problem and the theories or explanations given by the planners of the program.
- Understand the phenomena responsible for the persistence of the problem.
- Know the strategies, activities, and interventions that can change it.
- We must decide on each of the evaluative options:
- Internal, external, or co-evaluation: an external evaluation is advisable, in which the evaluators have had nothing to do with the design of the program or intervention.
- Purpose of the evaluation: of the multiple purposes there may be, carrying out a formative evaluation is the priority, as it contributes to the development of the intervention before it ends. This is the intention behind design evaluation: to improve the social program, or any other social intervention, before it is put into practice. It is, therefore, an assessment aimed largely at being able to keep making the well-founded decisions deemed necessary to restructure the components of the process toward the objectives or goals that were set initially.
- Initial, process, or final assessment: in this case it is the initial one.
- A qualitative approach, a quantitative one, or a complementarity of both: the qualitative perspective dominates, since the interest lies in describing the observed facts in order to interpret them in context; it “provides a comprehensive view of the entire program,” according to Parlett and Hamilton. The quantitative perspective is also present, and one of the most used techniques is the Delphi technique.
- A global or partial approach: the best thing to do is a global, molar, or holistic assessment, in which all parties, all subjects involved, and all phases of the program are taken into account. With this type of evaluation, the program responds to the principle of unity, which explains the development, implementation, and results.
- Therefore, the content of the assessment consists of all the elements that make up the social intervention:
a. Objectives:
- Are they clearly defined?
- Should we add some?
- Do we have to reduce or eliminate others?
- What is considered acceptable evidence of the achievement of operational objectives of the program?
- What obstacles may prevent the achievement of objectives?
b. Users or beneficiaries of the program:
- Are they clearly defined?
- Should we expand or reduce potential users?
- Do the users match the profile described in the program?
- What is the reason for choosing these and not others?
c. The methodology of the intervention:
- Is it well defined?
- Does it suit the objectives of the program?
- Have all institutions directly or indirectly involved in the program been consulted?
- Should we amend, extend or eliminate any aspect?
d. Resources:
- Are they sufficient, both human and economic?
- Do we have to add some for greater efficiency and effectiveness in achieving the objectives?
- Is it economically viable?
- Are the additional resources that an increase in the number of users would require specified?
- Does it have the approval of the participating institutions?
e. Activities:
- Are they well defined?
- Do they fit the objectives of the program?
- Should we eliminate, reduce or enlarge some of them?
- Are they scheduled in time?
- Who performs each one?
- The assessment of the evaluability of the program. This is the extent to which a particular program can be evaluated; it depends on whether the program has been well planned and on whether there are barriers that impede the assessment. It can be seen as a prerequisite for the assessment and a way to judge whether the planner has done a good job during the procedure. Wholey identified four problem areas in the evaluation of programs that maximize their difficulty and ineffectiveness:
- Lack of definition of the problem, of the established objectives, and of the predicted results.
- Lack of a theoretical base for the program (on which its actions are founded).
- Lack of clarity in the purposes of evaluation.
- Unclear evaluation priorities.
A preliminary analysis of the ability to evaluate a program avoids unnecessary effort and expense. This is why Wholey raised the need to assess the evaluability of a program as a formalized first step.
- The objectives to be achieved may vary depending on the group or person from whom the initiative for the assessment comes and on its more immediate claims. But from a professional standpoint, the objective is to determine the suitability of a program before its implementation: whether the planned social intervention meets the needs it aims to cover. In short:
- To study the relevance of the project to the social reality it seeks to modify.
- To know the coherence and internal consistency of the program in terms of objectives, methods, duration, and expected results.
- To analyze the feasibility of the economic and human resources.
- To study the provisions made for evaluation.
- With regard to approaches and models, these include:
- Professional review approach (ID). It can be especially useful when carrying out a comprehensive assessment of the program.
- Goal-based approach (or based on targets). It is simple and straightforward but has significant limitations. It starts from the objectives and verifies whether the design developed is the best one to end the problem, but it does not address the efficiency and effectiveness of the program. Even so, there is no doubt that the evaluation-by-objectives model helps decide whether the program will solve the problem.
- Decision-making approach. It considers the connection between evaluation and decision-making, although there may be several managers and different ways in which decisions are made. It is structured based on the actual decisions to be taken, which involves those responsible and defines the objectives of the evaluation.
Chapter 10 ASSESSMENT OF THE IMPLEMENTATION
1. Definition
This is the evaluation that allows one to know whether the social intervention or program is operating as expected in a given context and time; it has to do with the quality of the social intervention.
One of the key aspects of a program is its implementation: putting the intervention program into practice following the outlines of its design. A program may fail to produce the desired effect on beneficiaries because it has not been launched as designed, and waiting until it has been fully applied can be expensive and unhelpful; therefore, the evaluation of the implementation can be very useful. It is similar to what others call formative evaluation.
It develops over a three-step process:
- Understanding the essence of the intervention program, according to the design and documents leading up to it.
- Systematic collection of information on the key elements of the program, how to apply, how they work, etc.
- Comparison of the essential parts of the program as designed and as actually working.
The result may reveal discrepancies between the design and the implementation, in which case the program is redefined to ensure its proper implementation (a toy comparison follows this list). Apart from the collection of information, the key aspects are:
- The selection of the activities that constitute and define the program, separating the essential from the accessory.
- The setting of a sample of moments, units of analysis, and information collection sites, so that the evaluation has external validity.
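The comparison of the program as designed with the program as actually working can be pictured with a toy Python sketch; the activity names below are invented for illustration only:

```python
# Essential activities as specified in the program design (invented names).
designed = {"intake interview", "weekly workshop", "home visit", "final report"}

# Activities actually observed during implementation at the sampled sites.
observed = {"intake interview", "weekly workshop", "phone follow-up"}

missing = designed - observed   # designed but not being carried out
added = observed - designed     # carried out but never designed

print("Not implemented as designed:", sorted(missing))
print("Unplanned additions:", sorted(added))
# Discrepancies flag where the program must be redefined or realigned
# with its design before outcomes can be fairly attributed to it.
```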
It must answer the following evaluative questions:
- Who is responsible for the implementation?
- Has the program been implemented as designed?
- What means are used to ensure the program is carried out as planned?
- Are the results in line with what was expected?
2. The implementation evaluation plan
The development of an evaluation plan for the implementation, and its subsequent execution, is of utmost importance in the evaluation process; it is where the decisions that will shape the model are made (Table 1).
This type of assessment can be viewed with suspicion by practitioners who feel that it questions the way they work; yet it can bring improvements and achieve ever-higher levels of quality, which implies change and innovation in the intervention process. To ensure the success of the evaluation of the implementation, at least the following institutional conditions must be met:
- There should be internal motivation: those responsible for the implementation should lead and be involved, and the evaluation team must have the institutional support of the authorities.
- Those responsible must provide administrative support and resources to the evaluation team.
- It requires the participation of all concerned, who should make their views known, which will allow for a better intervention.
- To be meaningful, it is important to share a culture of quality in both the intervention and the assessment.
- Knowing the program well is one of the axioms of this evaluation. When the evaluation is internal, it is done by those involved, who already know the program; but if an external evaluation is chosen, becoming familiar with the program and knowing the features and activities that characterize it is the evaluators’ first task. Weiss makes it clear that “as important as conceptualizing the desired results is conceptualizing the nature of the program….”
- Understanding the social context in which the intervention is being implemented. Each program is conducted in a social framework that has implications for implementation. The immediate context is the organization that funds and carries out the program, but it is also necessary to widen the framework of action: programs may come into conflict with the socio-economic conditions of the environment, and we must ensure that those conditions have not changed since the diagnostic or needs assessment was performed. Social reality is not static, and any change in the context may affect program implementation and outcomes.
- Deciding on the following evaluative options:
- Internal, external, or co-evaluation: it is advisable to perform an internal assessment, or both at once; the goal is to correct, during the process, any anomaly detected in the evaluation (formative evaluation), with those involved in the social intervention being responsible for the improvement. Preventing rejection of the evaluation and creating a culture of evaluation is essential. Co-evaluation is defined as the combination of external and internal assessment, verifying the results of the internal evaluation against the external one or vice versa. Proponents of this third option, the majority, say that self-evaluation should be complemented by external evaluation.
- purpose of the assessment: conducting a formative evaluation that contributes to the improvement of the program. Is performed over the life of the program and noting the validity of going all components (activities, resources, methodology, professional action, operation of the intervention, environmental impact, …) with regard to achieving the objectives . It is proposed in order to go drinking, well-founded, decisions deemed necessary to restructure the components of the process objectives. Thebasic features of formative assessment are:
- Processual. It is intrinsic to the program.
- Integral. It covers all the program's elements.
- Systematic. It is a rigorous process.
- Structuring. It allows adjustment in line with changing needs.
- Progressive. It takes into account the achievements.
- Innovative. It allows decisions to be taken constantly.
- Scientific. It analyzes all elements of the process to determine the role of each.
In short, formative assessment can be viewed as:
- An adaptation of programs to the characteristics of beneficiaries.
- A research process that takes place throughout the program's development, allowing knowledge of the program through the collection of information and adaptation to the context. It involves ongoing reflection on the action and its improvement.
- A dialectical interaction between all the elements: professionals, resources, activities, and users.
- Global or partial approach. It seems reasonable to opt for a global, molar, or holistic assessment, in which all the parties and subjects involved and all stages of development are taken into account as a whole. If all one wants to know is how the budget is being applied, a partial evaluation would be best. The holistic assessment is based on the following premises:
- That a better understanding of the program or the Social Services could improve the opportunities and experiences of the intervention.
- That it allows the institution responsible to identify and produce evidence of how far the desired quality is being achieved.
- That a comprehensive study can help practitioners to identify the effects that need attention. Its advantages are:
- It addresses reality as it is, in its richness and complexity.
- It is the only means of assessing the relationships between the elements, rather than each one in isolation.
- It makes it possible to find the congruences and inconsistencies of the system: mismatches, gaps, and lack of coordination and coherence between elements.
- Whether to perform an initial, process, or final evaluation.
- Whether to choose a qualitative approach, a quantitative one, or the complementarity of both. It is an important issue, but it will be determined by the object of study. If we opt for an external and comprehensive evaluation, the quantitative perspective may be the most suitable; it is characterized by:
- The search for and belief in objectivity, as a result of the reliability and validity of the instruments for collecting and analyzing data.
- The hypothetical-deductive method is the procedure that can provide the necessary rigor, as established in the natural sciences and experimental psychology. The testing of hypotheses, or the search for empirical support, requires statistical processing and quantification of the observations.
- Observing rigorous standards of statistical methodology:
- operationalization of variables,
- stratification and randomization of samples
- construction of instruments with sufficient degree of validity and reliability
- the implementation of structured designs,
- the correlation of sets of dimensions across different and often large populations.
- The almost exclusive emphasis on products or results.
- Strict control of the variables in the process, neutralizing some while manipulating others and observing their effect.
- The structured design of an evaluation project requires the permanence and stability of the program over a long period of time.
- It focuses on quantitative information sought through objective means and instruments.
- A tendency to focus on the difference in results between the control group and the experimental group, ignoring individual differences, and to measure the easily quantifiable rather than identifying and tracking more uncertain long-term effects.
- The evaluation data have a specific utility for a given recipient.
- The goal may vary depending on the group or person from whom the initiative for the evaluation comes and on their immediate claims, although the overall objectives are:
- to know how the program or social intervention started and whether there are significant differences between the actual and the expected;
- to improve the quality of the social intervention and respond to the identified needs; those responsible need the evaluation itself to clarify whether they should continue the program or whether the funds are sufficient; and, finally,
- to report on the development of the program,
- to know how decisions are taken,
- to verify the negative aspects, if any.
- Approach or model. There are many approaches that can be used, such as:
- The objectives-based (or goal-based) model. It analyzes the objectives set and checks whether they are being achieved. It takes into account productivity and accountability, without worrying about efficiency. It is a simple and direct approach, but with important limitations, since it says nothing about how goals are achieved, why they have failed, or about unexpected outcomes.
- The goal-free approach. The evaluator should deliberately avoid information about the objectives, so as not to be led into bias. The context is examined to see whether the needs that motivated the intervention are being met. Two limitations stand out:
- during the evaluation, evaluators are continually asked about the objectives, so special care is required to keep the evaluation goal-free;
- the evaluator acts like a detective who must discover and extract key information; through the modus operandi, the evaluator must establish the relationship between cause and effect.
- The decision-making approach. It considers the connection between evaluation and decision making, attending to the various managers and how decisions are made. The assessment has to be structured around the actual decisions, which often refers it to the person responsible. The methodology adapts to changes in the environment.
- The professional review approach. It can be very useful, especially when carrying out a comprehensive assessment of a program. It requires peers to judge the work of their colleagues.
- The case-study (or negotiation) approach. It focuses on the review of the parts that comprise the program, using the perception of the subjects involved and of the evaluator. It aims to improve understanding with the recipients. One of its limitations is objectivity; others are fixing the limits of the research and being confined to the categories in which the subjects can assimilate and understand the data.
3. Example of evaluation of the implementation of equality policy
3.1. II Plan for equal opportunities between men and women of the Generalitat Valenciana
The Plan provides for its own evaluation and, among the different types of assessment, refers to implementation evaluation. For the creators of the Plan, the philosophy that underpins it is summed up in three lines:
- Having an effective tool to develop policies and intensify efforts to introduce the concept of gender equality into all policies and actions of the Valencian Government.
- Acting not only from the perspective of equal opportunities, but from the necessary pursuit of equal outcomes, developing positive action measures.
- Working closely with Valencian society.
The overall content aims to develop, between 1997 and 2000, 178 actions, distributed across 11 areas of performance with differentiated objectives:
- Awareness and consciousness-raising in society
- Legislation
- Culture
- Education
- Employment
- Rural environment
- Health
- Social Services
- Cooperation
- Business
- Urban planning
3.2. Evaluating the implementation of the Plan
It is approached from two perspectives of analysis: the quantitative and the qualitative. Within the quantitative perspective, the external assessment is twofold: on the one hand, a formative or process evaluation is performed, which includes, among other things, analysis of the implementation and monitoring of the coverage of the Plan; on the other, an evaluation of results and impact is made.
The overall objective has been to know the actions that were being launched throughout the period, which resulted in three main aspects:
- Knowing the degree of compliance with the Plan, noting which actions had been implemented and which had not. This gave rise to two classifications:
- according to the agents responsible for the implementation,
- according to areas, in order to analyze the degree of implementation of the Plan.
- Knowing which groups were targeted by these actions.
- Learning how the Plan was executed.
The design of the evaluation of the implementation comprises the following phases:
- A detailed study of the entire Plan, which allows operationalizing the objectives, becoming familiar with the Plan, and developing measurement indicators. A review was also carried out of whether all planned actions had been implemented.
- Development of measurement tools: a data sheet containing indicators to measure effectiveness and compliance. The information was collected directly through interviews with those responsible (a minimal sketch of such a sheet follows this list).
- Collection of information through the data sheet, complemented with document review, interviews, and systematic, direct in situ observation.
- Analysis of the information and writing of the findings report, based on the completed analytical categorization:
- by actors responsible for implementation,
- by areas of performance,
- by target groups,
- by the overall direction of the actions,
- and overall conclusions.
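By way of illustration only, the following minimal sketch shows how the data sheet and compliance indicators described above could be represented in code; the field names and the compliance measures are assumptions, not the actual instrument used in the Valencian evaluation.

```python
from dataclasses import dataclass

@dataclass
class ActionRecord:
    """One hypothetical row of the data sheet: a planned action of the Plan."""
    action_id: str
    area: str               # e.g. "Employment", "Education"
    responsible_agent: str  # body in charge of implementation
    target_group: str       # group the action is aimed at
    implemented: bool       # was the action actually carried out?

def compliance_rate(records: list[ActionRecord]) -> float:
    """Degree of compliance: share of planned actions actually implemented."""
    return sum(r.implemented for r in records) / len(records)

def compliance_by_area(records: list[ActionRecord]) -> dict[str, float]:
    """The same indicator broken down by area of performance."""
    by_area: dict[str, list[ActionRecord]] = {}
    for r in records:
        by_area.setdefault(r.area, []).append(r)
    return {area: compliance_rate(rs) for area, rs in by_area.items()}
```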
CHAPTER 11 EVALUATION OF RESULTS
1. Definition
It seeks to determine whether the program has achieved the design goals in terms of results and effects, with quantity, quality, and extension as its main features. It covers all the effects of the intervention: overall and specific objectives, positive or negative, direct or indirect. The questions to be answered are:
- Can we say that we have achieved the objectives?
- Can we qualify the intervention as a success or a failure?
- Have the needs that motivated the intervention been met?
- Should the program be stopped or continued?
2. The structure of the outcome evaluation plan
As in any investigation, the development of the evaluation plan is essential (Table 1). We analyze its elements:
- Knowing the program: becoming familiar with it and knowing the explicit and implicit aspects of the social intervention. Weiss: "as important as conceptualizing the desired results is conceptualizing the nature of the program."
- Knowing the social context takes on special significance. For Helen Simons, naturalistic inquiry means studying programs in their contexts. In order not to attribute to programs results that are not theirs, we must understand and control each of the variables that can influence and change the problem under intervention. Weiss insists on using evaluation designs that control contextual variables and recommends designs, such as the patched-up design, to control possible bias. This responds to whether we are able to ensure that the results are effects of the program rather than of uncontrolled variables.
Nor can we forget political influence. Stufflebeam considers that assessment is at the service of managers; for Stake, there are multiple recipients. McDonald, although he regards it as a service to decision makers, acknowledges the influence of evaluators on the distribution of resources. According to Hamilton, assessment conveys a vision of society. House argues that assessment is part of the political process; he defends Rawls's theory of justice as a commitment of the evaluator and considers that moral judgments must be incorporated into the technical concept of validity. Cronbach argues that a theory of evaluation must be as much a political theory as a way of building a theory of knowledge. House rejects this position, considering it partisan and politically impractical.
Ultimately, assessment stands out as part of the political process. Weiss believes it is a task performed in a political context and points out that:
- Policies and evaluation programs are the result of political decisions.
- Evaluation reports enter the political arena.
- The assessment itself takes a political stance.
- Deciding on each of the following evaluative options:
- The ultimate purpose of the outcome evaluation, which can be:
- Accountability. It refers to criteria of social performance. Social action programs that use public funds require guarantees of reliability and efficiency, which calls for a type of cost-effectiveness evaluation.
- Summative evaluation, designed to verify the final result. It allows one to adapt, apply, or reject a program and provides information to the sponsors on the level achieved. It serves three functions:
- attesting to the achievement of objectives,
- certifying the status and capacity of the program,
- and checking its validity.
- Whether the outcome evaluation will be global, partial, or mixed. It makes sense to make a global, molar, or holistic assessment: all parties, subjects involved, and phases must be taken into account, since all affect the final result. It must respond to the principle of unity as the element that explains the development, implementation, and results.
- Whether to follow the positivist/quantitative perspective
- or one framed in the naturalistic/qualitative or ethnographic perspective.
- Following the positivist/quantitative perspective, what is called evaluation research is best suited to this type of assessment. As defined by Cook and Reichardt, this perspective is characterized by its concern for controlling variables and for outcome measures expressed numerically. Weiss says its goal is to measure the effects of a program against the goals it set out to achieve, in order to contribute to decisions about the program. When she refers to measuring the effects, she refers to the methodological issues of research: measurement and control of variables. These are some of its features:
- The search for and belief in objectivity, as a result of the reliability and validity of the instruments for collecting and analyzing data.
- The hypothetical-deductive method is the procedure that can provide the necessary rigor. It requires statistical treatment of the data and quantification of the observations.
- Observing rigorous standards of statistical methodology: operationalization of variables, stratification and randomization of samples, construction of instruments with sufficient validity and reliability, application of structured designs, correlation of sets of dimensions.
- The almost exclusive emphasis on the outcomes or results.
- Strict control of variables, neutralizing some while manipulating others and observing their effect.
- The permanence and stability of the program for an extended period.
- It focuses on quantitative information sought through objective instruments.
- A tendency to focus on the mean difference between the control group and the experimental group, ignoring individual differences, and to measure what appears easily quantifiable and immediate, instead of identifying and tracing long-term effects.
- It fits a theoretical perspective of evaluation research: it should be concerned with checking the extent to which objectives have been achieved.
- The data have a specific utility for a given recipient.
- The criteria that best define the objectives are:
- the achievement of objectives,
- social impact,
- utility,
- relevance,
- and the level of user satisfaction.
Conducting an evaluation study means providing information that allows the work to avoid becoming a victim of the political, social, or economic interests implicit in most social intervention programs. The objectives to be covered by the assessment may be, according to Weiss:
1. Continue or discontinue the program.
2. Improve its practices and procedures.
3. Add or discard specific strategies and techniques.
4. Establish similar programs elsewhere.
5. Allocate resources among competing programs.
6. Accept or reject an approach or theory underlying the program.
But not only these; among others, it can also:
- Identify the strengths and weaknesses of the programs.
- Reveal the processes of implementation and the results of the intervention.
- Capture the views of service users.
- Establish the extent to which the program is applied as planned.
- Detect unexpected side effects deriving from its application.
- Resolve conflicts, as it incorporates new information, expands the possibilities of negotiation, and has the effect of reducing complacency.
- Improve practices, and add or discard specific strategies and techniques.
- Establish its efficiency and effectiveness.
- Possible approaches:
- Systems analysis is the best answer. It focuses on the measurement of outcomes to gauge the efficiency of the program. The data used are quantitative, and the measures are related through correlation analysis and other statistical techniques. It requires a good experimental model, and its main objective is an assessment that is as objective as possible: an appraisal whose results would not change even if made by other reviewers.
The main features are:
- The evaluation focuses only on the results, particularly on comparing the cost and product of programs.
- It is based on the micro-economic theory of social service areas.
- It adopts the methodology of positive science.
- Comparison is based on the use of experimental or quasi-experimental designs. Among the most common are time series, the pretest-posttest design, and what Weiss calls the patched-up design (a minimal sketch of a pretest-posttest comparison follows this list).
- The data with which we work are quantitative.
- The main indicators are the objectives of the program.
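As an illustration of the pretest-posttest design named above, here is a minimal sketch with invented data; in a real evaluation the scores would come from the program's own outcome instruments.

```python
# Minimal sketch of a one-group pretest-posttest comparison (invented data).
from scipy import stats

pretest  = [12, 15, 11, 14, 13, 16, 12, 15]   # outcome scores before the program
posttest = [15, 18, 13, 17, 15, 19, 14, 18]   # outcome scores after the program

# Paired t-test: does the mean gain differ significantly from zero?
t_stat, p_value = stats.ttest_rel(posttest, pretest)
mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
print(f"mean gain = {mean_gain:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```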
Three models stand out: those of Suchman, Escotet, and Stufflebeam.
- For Suchman, evaluation is a type of applied research whose logic must follow the scientific method to establish cause-effect relationships; the appropriate method would be the experimental one, because it can detect these relationships between the program and the results. Evaluation research is intended to demonstrate the value of social activity. He adds, as targets for an evaluation:
- Analyzing the motives or reasons for the success and failure of programs, measuring success and the achievement of objectives.
- Highlighting whether the basic philosophy of the intervention has been successful.
- Redefining the means necessary to achieve the objectives.
- The evaluation begins with a specific value, anything that has a particular interest (being good, bad, desirable ...); the purpose is then defined and criteria are selected to assess the achievement of the target.
- The next step is to identify an activity and its implementation, including the extent to which the activity has reached the predetermined aim. Based on this evaluation, a judgment is issued on whether the activity has been useful.
The draft assessment is made with a series of hypotheses, identifying variables and explaining the causal relationships between them.
- Escotet builds his model on three elements: inputs, operations (processes), and products, using information taken from all three. Evaluation is a feedback process that uses experimental design to collect empirical data on the relationship between inputs (stable elements), processes (implementation), and outcomes. Its advantage is that it avoids almost all the limitations of models based on objectives; its limitation lies in not considering contextual variables.
- This limitation is overcome in Stufflebeam's CIPP model. The novelty is the introduction of a new component: the context, which includes social and economic aspects.
- Methodological decisions. These concern the results, both their presentation and who their target will be. They should include the discussion and interpretation of the results as well as suggestions for future implementation. In terms of targeting, the following must be answered:
- Who should be informed of the results?
- Where will the results be published?
- Both the analysis of the results and its scope should be agreed in the Evaluation Plan. Confidentiality, where so determined, is a basic requirement.
- A quality assessment must meet the following characteristics (Stufflebeam):
- Useful: it serves a better understanding of reality and the improvement of the social intervention.
- Viable: it can be carried out without much resistance.
- Ethical: it respects the rights of those involved and meets commitments.
- Accurate: it uses the tools necessary to guarantee validity and reliability.
3. Example of evaluation of results: the Equal Community Initiative in Andalusia
3.1. Introduction to the Community initiative EQUAL
Conducted by the Regional Development Institute of Andalusia, the Equal CI aims to promote new ways of combating all forms of discrimination and inequality in relation to the labor market, through institutional cooperation at the local level and transnational employment interventions. With the intention that its results and evaluations serve as benchmarks for improving employment policies, it is articulated around the pillars of the European Employment Strategy:
- employability,
- promoting entrepreneurship,
- adaptability of firms and workers,
- and equal opportunities.
Development Partnerships were formed with the participation of government, representatives of the social partners, financial institutions, universities, research centers, and private entities, which had to establish a work program with two or more transnational groupings from other States. The activities respond to the following types:
- assistance (career guidance, business advice, training, work experience, support for the generation of activity, and work placements);
- structures and systems (training of trainers, ...);
- accompanying measures;
- management, monitoring, and evaluation;
- expansion of eligibility;
- transnational activities.
Equal focuses its efforts on three areas:
- the objective of the project,
- the context in which the actions arise,
- and the work process put in place.
The clusters formed had to present their proposals on the basis of an analysis of inequality and discrimination within their territories or sectors in relation to the labor market. Depending on the needs, the program of work had to be assigned to a subject area and incorporate an integrated approach aimed at mitigating the causes of exclusion and their consequences.
There are other principles that projects had to consider across the board in their planning, management, monitoring, and evaluation:
- equal opportunities,
- development of the information society,
- promotion of the environment.
3.2. Methodological considerations
The report covers the 21 Equal projects in Andalusia and is the result of joint analysis and cross-monitoring of each project. The final report compiled information from the following sources:
- The Equal programmatic documents and the main guidelines for the implementation, monitoring, and evaluation of the projects.
- The semiannual reports produced by the partnerships for the Equal GSE.
- The monitoring reports of the 21 individual projects.
- The final individual tracking reports.
- The final reports and the monitoring and evaluation reports.
- The websites of the projects and participating entities.
- The catalog of 193 products and materials produced by the projects.
The information collected by the computerized Equal GSE tracking system was insufficient; for that reason, tools were designed to gather information, such as 84 interviews for the qualitative elements and a follow-up questionnaire for the quantitative aspects, with a script focused as follows:
- The intervention strategy and processes, with the aim of extracting the program theory, i.e. the assumptions underlying them and the process designed.
- The Management, Monitoring, and Evaluation System of the EQUAL CI in Andalusia (GSE).
- The management mechanisms articulated by the partnership, in order to define the classes and relations that have influenced management.
- Compliance with the guiding principles and priorities of the ESF and Equal, in order to assess their inclusion in the strategy of the project.
A highlight of the analysis is the study of the influence of the actions on enterprise and job creation, through the construction of two simple linear regression models that yielded highly reliable results. Among the explanatory variables for the number of companies created and jobs created, the following were considered (a minimal sketch of such a model follows this list):
- The projects themselves:
- the number of people oriented, mentored, trained, and in work placements;
- whether or not training was provided to the unemployed;
- the number of members of the partnerships;
- the percentage of women, immigrants, disabled people, and excluded groups benefiting from the projects;
- whether or not the number of associations exceeded 25% of all partners.
- The projects' environment variables:
- the target population,
- unemployment rates,
- the rural or urban setting,
- the rate of business growth.
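A minimal sketch of the kind of regression just described, with invented data and a reduced set of predictors; the variable names are assumptions, and the report's actual models used the full list of variables above.

```python
# Minimal sketch: jobs created explained by project and environment variables.
import numpy as np
import statsmodels.api as sm

# One (invented) observation per project.
people_trained = np.array([120, 300, 80, 450, 210, 150])
partners       = np.array([8, 15, 5, 20, 12, 9])
unemployment   = np.array([14.2, 18.5, 11.0, 21.3, 16.8, 13.1])  # local rate, %
jobs_created   = np.array([30, 85, 15, 130, 55, 40])             # outcome

X = sm.add_constant(np.column_stack([people_trained, partners, unemployment]))
model = sm.OLS(jobs_created, X).fit()
print(model.summary())  # coefficients, R-squared, significance of each predictor
```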
3.3. Follow-up assessment
An evaluation process was incorporated at the start, halfway through, and at completion. To meet this responsibility, there were two alternatives:
- undertake monitoring and evaluation in-house and independently;
- and/or hire an outside agency to assist the evaluation process.
In 81% of the projects, monitoring and evaluation services were contracted out. In 53%, the external evaluations were independent of the internal ones, while 33% used external evaluations drawing on the data and reports of the internal ones. The process followed has been characterized by continuous monitoring of activities and a final evaluation.
3.4. Evaluation methodology
To monitor the activities, software was developed which facilitated the collection of information and generated monitoring reports, basic statistics on users, and data on the projects' business ventures.
The instruments used were information collection sheets and the definition of quantitative and qualitative indicators for monitoring and, to a lesser extent, for measuring the achievement of targets, together with indicators on specific issues such as gender impact. Questionnaires and interviews with the projects' technical staff and with beneficiaries were also used.
For the collection of information, the following were used:
- Continuous observation.
- Data collection sheets.
- Follow-up visits.
- Working groups with the technical teams.
- Interviews with:
- technical staff,
- organizations in the territory,
- the general population and users.
- Questionnaires to:
- technical staff,
- organizations and planning bodies,
- the general population,
- users.
- Focus groups with:
- technical staff,
- organizations and planning bodies,
- the general population,
- users.
There were difficulties in differentiating the monitoring tasks and in establishing the issues and criteria of value. The seminars and workshops organized made it possible to overcome these difficulties.
3.5. Conclusions of the evaluation processes and recommendations
Carrying out 21 parallel and coordinated evaluation processes has been a great contribution to building a culture of evaluation and to advancing its institutionalization and internalization. Among the main lessons learned, the skills acquired in evaluation stand out.
Any intervention with the characteristics of Equal has meant that the evaluation teams have faced a complex world:
- target groups that are hard to reach and excluded;
- multi-dimensional policies affecting different fields;
- interventions developed in geographically dispersed and disparate areas;
- interventions that contain multiple objectives of a diverse nature;
- the inclusion in management of multiple agents, "encouraged" to cooperate and work in a coordinated manner through networking.
It was decided to use participatory approaches and qualitative techniques, involving the partner organizations, and to focus the evaluation on processes, in order to meet the objectives: to identify and disseminate new ways of designing and implementing policies.
The experience has yielded a number of recommendations for future evaluation processes of this nature, among which are the following:
- Incorporate the assessment from the design stage and allocate the necessary resources, using participatory approaches and appropriate techniques.
- Conduct training in evaluation and disseminate its results, aimed at the assessment teams and project management.
- Complement and coordinate internal and external monitoring and evaluation systems so that sources are not saturated, forming joint evaluation teams.
- Establish testing protocols and coordination mechanisms that allow consistent and reliable aggregation of information.
CHAPTER 12 EVALUATIVE OPTIONS TO CONSIDER IN THE EVALUATION OF SOCIAL INTERVENTION
When deciding to evaluate, we must take a series of evaluative decisions that can be grouped into five blocks (Table 1), each involving a number of options; it is these options that determine the type of evaluation desired. The evaluator will select among the different alternatives to define the evaluation plan.
Table 1. Evaluative decisions in the evaluation of social intervention
| 1. Mode | 2. Purpose | 3. Level of analysis | 4. Perspective |
| Internal | Accountability | Global | Quantitative |
| External | Comparison | Partial | Qualitative |
| Mixed | Formative | Mixed | Complementarity |
| | Summative | | |
5. Approach: 1. Rationalist / 2. Naturalist
1. The instance from which the decision to evaluate originates
The decision to assess can come from:
- the administration on which the program depends,
- the management of the program,
- professionals,
- users,
- independent researchers.
Grouping these together, the initiative for the assessment may arise from:
- The program itself, at the request of the institution or of one of its members. The reasons may be:
- the desire to improve the program's activity,
- the need to solve a problem,
- the desirability of clarifying a situation,
- or to discuss some aspect of the program.
- Or spurious reasons: settling scores, individual advocacy, or seeking support for decisions in order to impose personal views.
- From above the program, born of a concern for control, the desire to compare the effectiveness of programs, the aim of providing a path for institutional improvement, or accountability.
- Researchers who intend to pursue a study. An independent researcher may propose an assessment in order to produce a doctoral thesis or other research.
Depending on whether those chosen to evaluate are involved in the institution or not, there are three types of evaluation:
- Internal.
- External (from outside the program, by experts in evaluation).
- Mixed or co-evaluation (a combination of the two, first internal and then external).
The three options are viable, but the most desirable would be a joint approach that provides the external view, or distance from reality. External evaluation guarantees an optimal level of objectivity, which is why commissioning independent evaluation teams is advised, as this gives legitimacy to the conclusions; however, managers may not support these conclusions or agree with their consequences. It represents an extra cost to the program and usually meets resistance among the actors of the program.
For example, if it is the professionals, as a party involved in the program, who decide to assess, the evaluation conducted will be internal; if the administration decides, the assessment will be external.
1.1. Internal or self-assessment
It is the process by which the program analyzes itself in order to:
- see whether it is doing what was proposed;
- check whether it is achieving what is wanted.
These objectives can be frustrated by problems that hamper self-assessment, among them:
- The resistance to being observed or be subject to evaluation.
- The individualistic nature of some professionals.
- Lack of professional motivation.
- Lack of time.
- Lack of technical support.
- Lack of credibility.
- Putting off the moment of evaluation.
- Concealment of substantive problems.
- Impatience for results.
- Fear of punitive power of the evaluation.
For Simons, one of the most important assumptions underpinning assessment is that professionals and programs are always self-assessing; they have the preparation and knowledge to analyze themselves, although this is not yet commonplace.
The fact that the assessment is made by those involved does not mean that the initiative has to start from them; it can be promoted by agencies that are:
- Internal: the stakeholders themselves feel the need to know the reality of the program in order to take the necessary decisions.
- External: the need is encouraged by individuals or institutions with an interest and/or responsibility, as in EU-funded programs, where it is the Community administration that stimulates internal evaluation.
In both dimensions, internal evaluation is oriented toward staff development and the consequent improvement of the program; an internal evaluation proposed from within is more likely to have the support of those involved in the process.
1.1.1. Advantages and disadvantages of internal or self-assessment
Some criteria determine them:
- Knowledge and understanding of what is to be assessed, and of its associated problems, are maximized. There is also the danger that the lack of emotional distance and the commitment of the people involved turn the evaluation into self-justification. This fear is voiced by Nuttall, who believes that internal evaluation can be used to reinforce mediocrity rather than promote change.
- If the initiative comes from those involved and is freely taken, rejections, misunderstandings, and concealment of information are avoided. The benefits may outweigh the disadvantages, particularly:
- its impact on the improvement of professionalism;
- the capacity it gives programs to solve their own problems;
- the value and reliability of the data provided, and the involvement of staff;
- it is an exercise in reflection and analysis and serves as feedback;
- it can be considered a process of action research.
1.2. External evaluation
It is the process by which actors external to the program assess its performance. It can be promoted:
- from within the program, as a result of the desire of those responsible to obtain accreditation or to receive accurate information about the state of the program, its weaknesses, conflicts, or dysfunctions, or its positive dimensions;
- from outside the program, to verify compliance levels, gain knowledge of the situation, or check the achievement of minimum goals.
The external evaluation may be total (covering all aspects of the program) or focus on certain plots or subsystems; it can also cover the whole social services system.
1.2.1. Advantages and disadvantages of the external evaluation
Its main advantage seems to be entrusting the assessment to persons whose independence is guaranteed, although objectivity is not always assured.
When the recruitment of specialists is carried out at the expense of those responsible, it can serve to achieve a more distanced, more comprehensive, and technically better evaluation; however, the lack of independence persists when customer satisfaction may be the only way to obtain a new contract. It also presents the risk of never getting to know the reality of the institution, owing to issues such as:
- evaluation becoming, for professionals, a control mechanism;
- possible pressure to obtain information quickly, which biases it;
- information being required at inappropriate times and in inappropriate ways;
- those observed determining what meaningful information is given to the observer;
- reality being misrepresented so that the evaluator gets a distorted view;
- the information being used in a partisan, arbitrary, and interested way;
- the evaluation being directed so that it reaches only the places, people, problems, and situations of interest;
- the reluctance of those assessed toward the evaluators;
- when data are requested from outside, it is easy for distortions to appear that alter validity or reliability.
- The advantages of internal evaluation become the disadvantages of the external one.
1.3. Mixed evaluation or co-evaluation
It is the combination of external and internal evaluation, comparing their results. Most experts favor this option and point out that the external evaluation should lead to the implementation of common, systematic institutional self-evaluation mechanisms.
Martin sees it as combining external interests with those of the institution, referring to the control that program managers exercise through their obligation to report and be accountable to society; but it should be conducted democratically, for the benefit of the institution and the professionals.
This pathway appears to find in each approach the solution to the problems of the other, but its implementation brings the disadvantages of both and requires more time and resources.
2. Purpose of the evaluation
The reasons that can lead to evaluating a social program are many and of very diverse character. We present four general objectives:
2.1. Purpose of accountability to society (accountability)
"Accountability" refers to criteria of social performance. Since public funds are used, political, social, and technical actors demand guarantees of reliability and efficiency, and this requires some evaluation of cost-effectiveness. It is not enough to explain how money is spent (fiscal accountability); spending must be justified in terms of results (a minimal sketch of a cost-effectiveness indicator follows the list below).
Among its defenders are Kogan and M. Scriven, the latter introducing a new nuance by considering accountability a condition and a requirement.
J. Elliott defines it as the institution's acceptance of the responsibility to evaluate itself and distinguishes three types of accountability:
- social,
- economic
- and professional.
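By way of illustration, a minimal sketch of the cost-effectiveness indicator mentioned above; the figures and the outcome unit are invented.

```python
# Minimal sketch: cost per unit of outcome as a cost-effectiveness indicator.
def cost_per_outcome(total_cost: float, outcome_units: int) -> float:
    """E.g. euros spent per beneficiary placed in employment (invented unit)."""
    return total_cost / outcome_units

program_a = cost_per_outcome(500_000, 250)  # 2,000 euros per placement
program_b = cost_per_outcome(300_000, 120)  # 2,500 euros per placement
print(f"A: {program_a:.0f} eur/outcome; B: {program_b:.0f} eur/outcome")
```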
As an example, the assessment developed by Schrader of a rural development program in the Land region, designed to evaluate its efficacy, was broken down into specific objectives:
- Measuring the overall effectiveness of the program in relation to its physical realization.
- Knowing the change in the behavior of beneficiaries.
- Estimating the change in the behavior of entrepreneurs.
- Knowing the behavior modification of other citizens.
- Knowing the involvement of the different authorities.
- Examining cooperation between governments.
- Measuring the financial and administrative implementation and analyzing socio-economic development.
2.2. Comparing social programs
"Assessment" is often used as a synonym of evaluation, but it is sometimes used to refer to processes based on quantitative and standardized tests ("testing"), although some suggest that it is less a trial and more a process, like other types of evaluation; according to Scriven, it "is a type of evaluation in which the judgment is based on numerical results."
The comparative method is best suited to providing practical and theoretical knowledge of the processes and outcomes of programs in different contexts, or in the same context when several are compared. Beltrán says that it "is a consequence of diversity, of the variety of forms and processes, of structures and social behaviors, both in space and in time, leading to the consideration of two or more objects that have something in common and something different"; it provides a non-egocentric view and shows how the intervention changes when facing different challenges, what the response was, and what problems were found.
Some of the advantages identified by Hantrais stand out: it allows one to understand situations in different contexts and to identify "weaknesses" in knowledge, as well as solutions and new perspectives.
Given its purpose and benefits, it can be a useful tool in the evaluation of social programs when the aim is to compare processes or outcomes.
2.3. Understanding the evolution of social intervention (formative assessment)
Named by Scriven, it contributes to the development of a program. It is performed along the implementation process, and through it we can see the validity of all the components (activities, resources, methodology, professional action ...) with regard to achieving the objectives, in order to take the decisions necessary to readjust them:
- The basic characteristics of formative assessment are:
- Processual. It is intrinsic to the program.
- Integral. It covers all elements of the program.
- Systematic. It is a rigorous process.
- Structuring. It allows adjustment in line with current needs.
- Progressive. It takes into account the achievements.
- Innovative. It allows new decisions to be made constantly.
- Scientific. It analyzes all elements of the process as part of the system, to determine the role of each.
In short, formative assessment can be viewed as:
- An adaptation of programs to the characteristics of beneficiaries.
- A research process carried out throughout the program, allowing knowledge of it by gathering information and adapting to the context. It is an ongoing reflection on social work and its improvement.
- A dialectical interaction between all elements of the program: professionals, resources, activities, and users.
2.4. The final results of the program or social intervention (summative evaluation)
It tests the effectiveness of the results of a program and can be considered complementary to the formative evaluation. It allows one to adjust, continue, implement, or reject a program and provides information on the level reached in relation to the objectives. It performs three major functions:
- attesting to the achievement of objectives,
- certifying the status and capacity of the institution responsible for its implementation,
- and checking its validity.
One example is the evaluation by Luengo, M.A., of the "Life Skills Training" program, whose overall objective is to understand its impact on the final year of Primary Education; its specific objectives are:
- To evaluate the effectiveness of the program in preventing the onset of drug use (tobacco and alcohol) or delaying the age at which it starts.
- To check whether it has effects on reducing the frequency of consumption.
- To assess its impact in avoiding or reducing the frequency of antisocial activities.
- To evaluate its effectiveness on consumption-related variables such as knowledge, attitudes, and intention to use.
- To assess the effects of the intervention on the personal and social competence of adolescents.
Stake's metaphor can serve to clarify the differences between formative and summative assessment: when the cook tastes the soup, that is formative assessment; when the client tastes it, summative, and there is no longer any possibility of change. They are two different conceptions of assessment, but they are not contradictory or incompatible. Both have been implemented in the DCB of the reform of non-university education.
3. Level of analysis in the evaluation of social services
There are two approaches or levels of analysis in the evaluation of social services, concerning the evaluation of programs, centers, or the set of social services, although some authors include a third, called mixed.
3.1. Molar, global, or holistic level
It considers the evaluation as a whole, in which all parties, individuals, and phases must be taken into account in a globalizing way, and it is based on the following assumptions:
- That a better understanding of the program or the Social Services could improve the opportunities and experiences envisioned.
- That studies and systematic reviews allow the institution responsible to determine to what extent the desired quality has been implemented.
- That it can help professionals to identify the effects that need attention.
Furthermore, when considering social services as organizations, we must bear in mind:
- a plurality of actions carried out in collaboration between the various members;
- a comprehensive and systematic occurrence of these actions, excluding others that are casual or incidental;
- a certain level of provision in the conduct of members;
- a goal or final element of the organization that justifies its existence, toward which the actions taking place within it are driven and directed.
The functioning of the organization has to respond to the principle of unity. An example would be the evaluation of the plan of a public institution or of a Ministry of Social Policy.
It can be approached from a novel perspective by applying the ecological model to social services for the study and understanding of organizations: the organization is conceived as an ecosystem, "the set of agencies of a particular environment," a paradigm that bases the assessment on the following premises:
- It places equal emphasis on all elements of the ecosystem.
- The context becomes a determining force.
- It emphasizes the nature of relations and exchanges of a psychosocial character.
- It attends to the processes taking place within the community.
- The "being" of the institution matters more than its "duty to be."
- The study of the roles of the actors creates a unique perspective for interpreting reality.
- There are rules that are negotiated or imposed in formal and informal structures.
- The institution creates situational indicators.
- There are connections with the external environment that determine the dynamics occurring in social services.
3.1.1. Advantages and disadvantages of the level of global analysis
Although there are many drawbacks, the advantages can outweigh them, primarily:
- It addresses reality as it is, in all its richness and complexity, recognizing that the behaviors, attitudes, and relationships of a subsystem are conditioned by those of the system within which it fits.
- It is the only means of assessing the relationships between elements.
- It makes it possible to find the congruences and inconsistencies of the system, its mismatches, and its lack of coordination and coherence.
The most significant drawbacks relate to:
- The methodology is very complex, since it involves modeling capable of incorporating the network of relationships and finding data analysis procedures to match.
- It involves incorporating large numbers of variables that are addressed from different data production sources.
- Each variable needs an extensive network of indicators to gain validity and reliability, resulting in an almost unmanageable accumulation of data.
- The difficulty of obtaining data sources, and the dedication and effort required to respond to them.
- The participation of all concerned must not be forgotten, which is an arduous and difficult task.
3.2. Partial or molecular level of analysis
It defines assessment as addressing one aspect or part of a whole.
Social Services, as a social institution, is a system: a set of factors among which multiform relationships exist. Partial evaluation would consist in assessing one of the subsystems that compose it, such as the program subsystem.
3.2.1. Advantages and disadvantages of the level of partial analysis
The main advantage is that it allows a more thorough, reliable, and valid study, with a better measurement of the variables, as there are fewer of them.
Among the drawbacks it should be noted that:
- There is a danger of losing the reference to, and explanation of, the encompassing process.
- One loses sight of the consideration of the service as an ecosystem, which is what explains the overall operation.
- The evaluated aspect is decontextualized from the overall system, which leaves certain phenomena unexplained.
3.3. Mixed level of analysis
It is a combination of the two levels of analysis, characterized by a general knowledge of the service and, through it, the detection (diagnostic function) of levels, roles, attitudes, relationships, or behaviors that indicate difficulties and conflicts. A specific study of these difficulties, a partial evaluation located in those areas, enables improved decision making.
4. The perspective from which the evaluation starts
A paradigm, according to Kuhn, is a global concept similar to "world view," "philosophy," or "intellectual orthodoxy." It prescribes the problem areas, the research methods, and the standards of solution and explanation. He defines it as an "interrelated set of assumptions about the social world that provides a philosophical and conceptual framework for the organized study of that world."
Weaver says that a paradigm serves as a guide to:
- identify the significant problems of a discipline;
- develop explanatory schemes (models and theories);
- establish appropriate criteria for the work (methodology, instruments, type and form of data collection);
- provide the epistemological basis from which knowledge can be built.
From Kuhn's definition, two perspectives can be considered in evaluation:
- The qualitative perspective: its interest lies in describing observed facts and interpreting them in the context in which they occur in order to explain the phenomena; its basis is ethnography and sociology.
- The quantitative perspective: it is characterized by its concern with controlling variables and measuring results expressed numerically. It is based on psychology and the natural sciences.
Reichardt and Cook, among others, set out the differential attributes of the two paradigms and show that each is linked to a particular type of research method.
Differential criteria between the qualitative and quantitative perspectives
| Qualitative | Quantitative |
| Qualitative methods. | Quantitative methods. |
With reference to assessment, Pérez Gómez analyzes the characteristics of the two approaches:
4.1. Quantitative perspective: its nature
- The search for and belief in objectivity, the result of the reliability and validity of the instruments for collecting and analyzing data.
- The only procedure that can provide objectivity is the hypothetical-deductive one. The testing of hypotheses, or the search for empirical support for theories, requires statistical treatment of the data and quantification of the observations.
- Observing rigorous standards of statistical methodology:
- operationalization of variables,
- stratification and randomization of samples
- construction of observation instruments with sufficient validity and reliability,
- implementation of structured designs,
- correlation of sets of dimensions across different and often large populations.
- The almost exclusive emphasis on the outcomes or results.
- Strict control of intervening variables. It requires strict control of the factors involved, neutralizing some while manipulating others and observing their effect.
- The structured design requires the permanence and stability of the program for an extended period, regardless of changing circumstances.
- It focuses on seeking quantitative information through objective instruments.
- A tendency to focus on the mean difference between the control group and the experimental group, ignoring individual differences and measuring what appears easily quantifiable, rather than identifying long-term effects that are more difficult to analyze (a minimal sketch of such a comparison follows this list).
- The model fits into a perspective of evaluation research: it seeks only to verify the extent to which objectives have been achieved.
- The data have a specific utility for a given recipient.
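A minimal sketch of the control-versus-experimental comparison just described, using an independent two-sample t-test on invented outcome data:

```python
# Minimal sketch: comparing mean outcomes of experimental and control groups.
from scipy import stats

experimental = [24, 27, 22, 29, 25, 28, 26, 30]  # outcome, program participants
control      = [21, 23, 20, 24, 22, 25, 21, 23]  # outcome, comparison group

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value supports attributing the mean difference to the program,
# provided the contextual variables Weiss warns about have been controlled.
```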
Critics argue that the nature of social phenomena recommends a qualitative approach, because their complexity cannot be captured through experimental designs. Beltrán says that "a sociology modeled on the sciences of nature betrays its purpose." Ibáñez holds that "statistical (distributive) technology is in principle subordinate to language (structural) technology."
Quantitative technology poses problems in the measurement of social phenomena, and specifically in evaluation:
- Rating scales, questionnaires requiring numerical assessment, and the like do not open the way to an explanation of reality.
- Attributing numbers to complex realities carries a risk of inaccuracy and misrepresentation, aggravated by the appearance of objectivity. The complexity of Social Services, with its ideological, political, economic, psychological, and pedagogical dimensions, is obvious:
- in the codification process, since reality is parceled out and a qualitative fact is scored numerically, from the subjectivity of the examiners;
- in the decoding process, since meaning is given to the numbers assigned by the assessor.
- Using quantified patterns for comparison is arbitrary, and a program that achieves high scores cannot quite properly be regarded as "better."
- The investigator must decipher what appears encrypted in the reports. Only by understanding the keys can a correct decoding be made, and the operation must proceed through speculation.
- In the evaluation of social programs, it is not easy to parcel out reality with the generalizations of experimental designs, because of programs' peculiarities and idiosyncrasies. When the data are operated on, the numbers are decontextualized and deprived of meaning.
4.2. Qualitative perspective: features
- Objectivity is always relative, and the evaluation can never be fully objective.
- Understanding a situation in which individuals interact with intentionality and subjective meaning requires considering the different views and ideologies through which they interpret the facts.
- The assessment cannot be detached from values; it cannot consist solely of the aseptic contrasting of results against predetermined, observable, and measurable targets.
- The objective of the evaluation is not restricted to overt or immediate behaviors; long-term and side effects are equally or more significant.
- The field of intervention products should be extended, not restricted.
- The evaluation process focuses on trying to capture the uniqueness of the specific situations, which can account for what happens.
- It requires a methodology sensitive to differences, to unforeseen events, to change, to observable manifestations, and to latent meanings.
- It incorporates a set of techniques, guidelines, and assumptions from the methodology of ethnographic field research.
- A strictly pre-structured design cannot be an appropriate instrument; a flexible design is required, allowing a "gradual approach" to the areas that appear most significant in the course of the investigation.
- The purpose is to understand the object of study through the interpretations, interests, and aspirations of the actors, in order to provide the information everyone needs to understand, interpret, and intervene as appropriate.
- The report must respect the privacy of all involved.
Filstead distinguishes the differences between the qualitative and quantitative paradigms and argues that the qualitative one yields the most appropriate results for evaluation research. Among evaluators, there is a shift from an emphasis on quantitative methods toward qualitative assessment techniques.
4.3. Complementarity of both perspectives
Both have supporters and detractors, but the latest trends are committed to their union. In the period 1940-1960, the quality/quantity controversy reached its peak and was settled in favor of quantity, while quality retained a role in the exploratory stages of research.
In the sixties, two circumstances arose that would have a huge impact, as illustrated by Alvira:
- The crisis of justification and the separation between the context of discovery and the context of justification. Anguera observes that the Vienna Circle had entered into decline and that the possibility of verifying theories was questioned, with strong criticism pouring in from alternative positions, such as Achinstein's skeptical descriptive analysis and Kuhn's paradigmatic approach.
- A major technological advance in measurement and in the mathematical analysis of data, which resulted in the deployment of various procedures for the treatment of qualitative data.
This led to a review of both perspectives, shortening the distance between them and arguing for their complementarity. Alvira, Anguera, and Álvarez say it has come to be assumed that both are necessary and can function together in a complementary way. Cook and Reichardt write phrases like "there is nothing, except perhaps tradition, that prevents the researcher from mixing and matching the attributes of the two paradigms to achieve the combination most appropriate to the research problem and the means available."
The advantages of simultaneous use can be summarized in the following points:
- Evaluation research has multiple purposes that must be served under the most demanding conditions, which often requires a variety of methods.
- Used in conjunction, the two methods can invigorate each other and provide a perception that neither could achieve separately.
- No method is free from bias. The underlying "truth" can only be reached by using multiple techniques and the corresponding triangulations.
The obstacles that arise in combining them should also be noted, highlighting:
- It can be expensive.
- It takes too much time.
- Researchers may lack sufficient training in both types of method.
- Fashion, or adherence to one paradigm as a form of dialectical debate.
None of these, however, is so insurmountable as to prevent opting for this approach. The dichotomy between the two perspectives is unnecessary if the demands of evaluation research are to be met as effectively as possible.
5. Assessment approach
House groups the approaches into six basic ideal types for analyzing reality. The panorama of approaches, also called models, shows the diversity within the evaluation of social action. They display the variety and evolution followed by the discipline through its different models, starting with the positivist perspective and reaching the methodological diversity that gives access to the actual dimensions of the object to be addressed in each case (Bertram, 1989).
- Behavioral objectives-based (or goal-based) approach. It analyzes the objectives set and checks whether they have been achieved. It takes into account productivity and accountability, without worrying about efficiency. Widely used in education and public administration, it was promoted by Tyler in 1950.
- Goal-free approach: a reaction to the former. Scriven argues that the evaluator not only must not base the assessment on the objectives, but should try not to learn about them at all, so as to avoid bias; Scriven's concern is the reduction of bias.
The evaluator must investigate all the results, since many of them are unexpected side effects, which may be positive or negative. Of all the approaches, it is the least used, since many researchers find it hard to imagine evaluation criteria beyond the scope of the stated responsibilities.
Given this, Scriven develops the concept of "need," which is discovered through assessment, as opposed to tastes and desires. Goal-free evaluation is based on the analysis of user needs, not on the objectives of those responsible.
- Decision-making approach. Bears in mind the connection between evaluation and decision making. The assessment has to be structured from the actual decision, which alludes to the Officer. Stuffelbeam for evaluation is the process of defining, obtaining and providing useful information for possible alternative decisions. Identifies three areas of decision:
- homeostasis,
- incrementalism,
- neomovilismo.
Four types of situations:
- planning
- structuring,
- implementation,
- recycling.
Three stages in the evaluation:
- demarcation
- procurement,
- communication.
Four types of evaluation:
- context
- entry (input)
- process
- product.
- Systems analysis approach. It focuses on measuring outcomes; the aim is to measure the efficiency of the program. The data used are quantitative, and performance measures are related to processes using statistical techniques. It requires a good experimental model, and its central goal is to be as objective as possible: an assessment whose results would not change even if conducted by others. The main problem is that it falls into a reductionism that is not consistent with human complexity. Economists and managers are the most inclined toward this model.
- Professional review approach. Professional review has been a fundamental means of assessment. It assumes that professionals judge the work of their colleagues. It can be very useful in evaluating the design and implementation of an intervention program.
An interesting example is the evaluation of university departments or units, or the boards that evaluate medical specialties. In all of them, it is professionals who evaluate their peers.
- Case-study (negotiation) approach. It focuses on the review of the parts of the program, using the perception of those involved in it and of the evaluator. For House, "the aim is to improve the understanding that the reader or recipient has of the evaluation, showing, above all, how others perceive the program."
There are many decisions to make when assessing, conditioned in most cases by the objectives of the evaluation and the philosophical formation of the evaluator.