Institutional Evaluation: Challenges and Best Practices

Institutional Evaluation: Key Considerations

Professor Sancho Gil highlights one of the most serious problems posed by evaluation: the rendering of a judgment on an event or condition that was already known before the judgment took place. Other authors point to further obstacles, noting that governments make serious mistakes and show shortcomings when evaluations are carried out merely as a legal obligation, which leads to little shared reflection. When an evaluation is conducted only at the end of a program, it fails to act as a potential generator of change and ends up as a mere bureaucratic document.

The impartiality of evaluation is also questioned: according to Bater, evaluation is rarely neutral, since it involves the interests of one or more groups of people. This lack of neutrality can, however, be addressed in alternative ways.

Canal and Noval suggest the following reasons why the public system is reluctant to embrace evaluation:

  • The academic status of almost all its workers: having passed through so many selective filters, they resist any further scrutiny that might call that status into question.
  • The rarity of self-assessment exercises, which are seen as a passing fad with little practical effect, together with the absence of practical, operational programs of professional assessment and inspection services.

Other factors also play a role, such as the unpopularity of evaluation and the limited capacity of government to take into account the contributions made on the ground by its members. A preliminary step is therefore needed before implementing any institutional evaluation process: creating a climate favorable to it and sensitizing everyone involved.

Evert Vedung’s 8 Key Issues for Evaluation Implementation

Evert Vedung considered the following eight issues the most relevant ones to address when implementing an assessment and ensuring its successful completion:

  1. Purpose of the Assessment: For what overall purposes is the evaluation initiated?
  2. Who Should Carry Out the Assessment: How should it be organized, depending on whether it is internal or external?
  3. Intervention: How can a culture of evaluation be created? The description of the intervention can be treated more lightly here, since those involved are thoroughly familiar with the program. When evaluating side effects, however, it may be appropriate to analyze the intervention in terms of means and ends.
  4. Implementation: What pressures or commitments does the evaluation plan respond to? At this stage, evaluators follow the program from its origin up to the point immediately before its outputs.
  5. Results: What are the outputs and the immediate and late results of the intervention? Are there only outputs, or are there also consequences, or only consequences?
  6. Variables: What causal factors explain the forces operating to produce the result?
  7. Scales: This problem is common to all evaluations, because there is no consensus on which criteria should be used to gauge the importance and value of the intervention. It also involves identifying measurable elements, strong points of the intervention that can support a retrospective assessment of its results.
  8. Use: How should the evaluation be used, and how is it actually used?

Stufflebeam and Shinkfield’s Classification of Evaluations

Stufflebeam and Shinkfield classify evaluations into three categories:

  1. Pseudoevaluations: Seek to mislead, for example through covert investigations or studies driven by public relations.
  2. Quasi-evaluations: Answer certain questions of interest without determining the object’s value, such as objectives-based studies, experimentation, accountability studies, and program and information systems studies.
  3. True evaluations: Attempt to examine the value and merit of an object, such as client-centered studies, studies oriented toward decision- and policy-makers, and consumer-based studies.

Difficulties in Understanding External Evaluation

External evaluation is conducted by people who are external to the social intervention. Two situations can arise:

  1. If the initiative comes from the senior management of the public administration, those involved may perceive it as yet another control mechanism introduced for political or economic reasons. They may regard the evaluator as a representative of the hierarchy against whom they must protect themselves, which can produce a distorted picture of the program’s reality.