Action Research in Education: A Comprehensive Guide

Features:

  • Restrict the meaning of the general objectives.
  • Accept only one interpretation.
  • Involve selecting among the possible interpretations of the general objectives.
  • Are formulated in terms of measurable, observable manifestations and are equivalent to research questions.
  • Facilitate a more structured research project.
  • Can be broken down for analysis.
  • In general, well-formulated objectives improve the coherence of the study as a whole.

3.4. Formulating the Hypothesis-Action

A hypothesis is a statement of the presumed cause of the problem identified when choosing a topic; it provides a provisional answer to the research problem. Hypotheses indicate what the researcher needs to discover: they clarify the variables to be examined and the relations between them. They are the starting point of the investigation, which aims to confirm or refute them, to find out whether they are true or false. Formulating a hypothesis involves tentative ideas about the factors we consider crucial. In action research, the group must undergo a process of reflection on goals and issues before it can clarify the hypothesis. A hypothesis-action indicates an action to be performed, arising in particular from self-reflection on and self-understanding of the situation. Hypotheses-action constitute the substance of the study.

3.5. Dynamics to Test the Hypothesis-Action (Design)

How to develop a good design for data collection?

A good design has to provide:

  • How to collect data (instruments).
  • How to organize the data collection so that it shows the influence of some factors on others; this requires comparing the data obtained through observation, interviews, tests, or documents.
  • How to select a sample.
  • What methodology to use, whether it is quantitative or qualitative.

The research team should be aware that quantitative (experimental) data are characterized by:

  • Control of intervening variables.
  • Manipulation of the independent variables.
  • The random drawing of samples.

The shift from quantitative to qualitative methods involves these changes:

  • From the reduction and operationalization of variables to the holistic consideration of life in its particularity.
  • From the control of variables to the open, flexible observation of natural life.
  • From aseptic distance from the investigation to active participation in the events.
  • From objective techniques to interactive observation and interviewing.
  • From pre-specified designs to flexible, emergent designs.
  • From the impersonal report to the living account.

The researcher must know both methodologies and use whichever is considered appropriate.

3.5.1. Choice of Instruments

To gain knowledge of reality, we must approach it with various methods and instruments, since reality can be known from within and from without, as we try to describe, analyze, evaluate, or interpret what happens. According to Selltiz, a variety of methods lends greater credibility to the results. The instruments are the technical means used to facilitate the handling of the data specific to each study and the objective recording of the resulting behavior. They are used to sort, process, and interpret the data obtained in the study. They must aim for objectivity, representativeness, and accuracy.

3.5.2. Data Analysis

This, according to Latorre and González, is the stage of systematic search and reflection on the information obtained from the instruments. It is one of the most important processes of the investigation and involves working with the data: collecting them, organizing them into manageable units, synthesizing them, and seeking regularities. It is important to discover what will contribute to the study.

In analyzing the data, we aim to reduce them so that they can be expressed numerically and graphically.

Techniques and Procedures for Data Analysis

Qualitative analysis has the advantage of flexibility, but it requires great rigor to justify every step of the analysis. We must clarify the purpose of the analysis and explore the data in order to:

  • Collect all data.
  • Complete any incomplete data that appear.

At this point we can highlight the most relevant aspects of the study. We can make a first summary of the data, which will help us formulate a series of questions about:

  • What happens, or how it happens: the first categorization and interpretation.
  • Why it happens: the first explicit associations found.
  • Whether more data are needed, in which case we can resume the search.

Review Procedures

A critical moment in the action research process, analysis can be approached in different ways, deductive or inductive in nature: building categories and concepts, or enumerating the frequencies of the observed phenomena in order to verify a theory.

Regarding analysis procedures, following Bartholomew, we can distinguish:

  • Standardized observation protocols derive from an already developed theory, such as Flanders's observation scale, and simply count frequencies in each category. They represent behavior patterns, and the data are obtained by recording, within each time interval, the order in which behaviors occurred.
  • Content analysis is often used for the systematic analysis of written documents, field notes, recorded interviews, diaries, etc. It studies in detail the contents of written, oral, and visual communication through the technical analysis and quantification of the materials.
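Both procedures ultimately rest on counting how often each category occurs. As a hedged illustration (the coding scheme and keyword matching below are invented for the example; in a real study the categories come from the researcher's framework and the coding is done by human judgment), a minimal Python sketch that tallies category frequencies across a set of field notes:

```python
from collections import Counter

# Hypothetical coding scheme: each category maps to keywords that signal it.
CATEGORIES = {
    "praise": ["well done", "good"],
    "question": ["why", "how", "what"],
    "lecture": ["explains", "states"],
}

def code_notes(field_notes):
    """Count how many notes fall into each category (a note may match several)."""
    counts = Counter()
    for note in field_notes:
        text = note.lower()
        for category, keywords in CATEGORIES.items():
            if any(kw in text for kw in keywords):
                counts[category] += 1
    return counts

notes = [
    "Teacher explains the task and states the rule.",
    "Teacher asks why the group chose this answer.",
    "Teacher says: well done, good work.",
]
print(code_notes(notes))
```

The point of the sketch is only the quantification step that follows the coding; the interpretation of what the frequencies mean remains the researcher's task.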

3.6. Interpretation and Integration of Results

Once the data have been collected and analyzed, the researcher can answer the problems raised. In educational research, the aim is to understand reality, but not to remain at mere understanding. To go beyond it, we must:

  • Integrate the work we have done into a broader framework that illuminates our hypothesis-action; this will help the researcher, from his own theorizing, to develop a practical theory (Kemmis).
  • Carry out two levels of analysis: one at the level of the individual case, and another that allows cases to be compared and integrated within a theoretical framework.

The interpretation of the data requires systematic and critical reflection on the whole process of investigation, in order to contrast it with the theory and with the practical results.

Within an educational reality, this means contrasting scenarios, granting or withholding credibility, and placing the results within a theoretical framework that supports practice.

3.7. Report on Action Research

Throughout the work, the process should be documented so that it can be made known to all concerned. The report records what has been done, how it has been done, and the results achieved. It should clearly present the objectives and the research problem; whether or not the proposed objectives have been reached; the hypothesis-action (what we intend to do to improve the situation, what we must change, and in what way); how we collected and analyzed the data; which group provided the information; and the findings. It must also collect the suggestions and advice needed to change the reality under study.

Lewin proposed a report identifying all stages to complete a cycle of action research.

Hopkins believes that the report should gather data so that:

  • The study can be replicated on another occasion.
  • The evidence used to generate the hypotheses and the action is clearly documented.
  • The action taken as a result of the study can be monitored.

To facilitate the process of preparing the report, we highlight the following phases:

  • How the general idea evolved over time.
  • How the understanding of the problem situation evolved over time.
  • What measures were taken in light of these developments.
  • The extent to which the proposed actions were carried out.
  • The intended and unintended effects of one's actions, and explanations of why they occurred.
  • The techniques selected to obtain information about:
    • The problem situation and its causes.
    • The actions taken and their effects.
  • The problems encountered when using certain techniques, and how they were resolved.
  • Any ethical problems posed in negotiating access to the information, and how they were resolved.
  • Any problems that arose in managing the steps, time, resources, and cooperation needed for the development of the action research.

3.8. Change in Practice

Action research does not end with the formulation of conclusions, more or less successful, and the drafting of reports; it ultimately seeks to influence practice. We must ask:

What impact has the action research had on the team involved? In what way, and why? What can we do to improve the situation, and how?

It requires a constant feedback process. Therefore, it never ends.

ITEM 4

RESEARCH EVALUATION

INTRODUCTION

Evaluation: This is not a grade awarded at the end of a period of teaching, nor merely a check on the progress of a student, a class, or an educational institution; it is a process for verifying the usefulness of any object.

1. WHAT IS ASSESSMENT?

Definition: Evaluation is the systematic appraisal of the worth or merit of an object (Joint Committee on Standards for Educational Evaluation).

Insofar as evaluation optimally uses objective procedures to obtain reliable information, it is impartial and bound to the act of valuing. Its ultimate goal is to determine the value of the object being judged.

Evaluation is a complex undertaking; this means that we must be very careful not only when compiling the information, which must be of high quality, but also when clarifying and justifying the value perspective used to interpret the results.

The terms "worth" and "merit" should be defined to enrich our concept of assessment. Although a meritorious program may lack worth, a program without merit cannot be worthy. We must consider three criteria when making an evaluation:

  • The different customer expectations.
  • The merit or excellence of the service.
  • The need for the service (its potential worth).

Besides these three aspects, we must consider two more: the feasibility and equity.

Example: although a service may be of great quality, it may consume more resources than needed or cause considerable political problems. In either case the program should, in principle, be modified in those areas to make it more viable.

Equity is predominantly associated with democratic societies. It requires equal opportunity for everyone and demands freedom for all.

As Kellaghan said, for there to be equity, seven signs of fairness should exist:

  1. Educational services of a society must be geared towards the entire population.
  2. All sectors of society should have the same kind of access to services.
  3. All groups should enjoy equal participation in the use of services.
  4. Achievement levels are not significantly different for different groups.
  5. Skills required to achieve service objectives should not be substantially different for different groups.
  6. Life aspirations to be sought should be similar in all groups.
  7. The service system should produce similar results in regard to improving the welfare of all social groups.

We must also consider three other aspects of evaluation:

  1. Whether the evaluation needs to be comparative.
  2. The main uses of evaluation.
  3. Whether evaluators need special qualifications.

Overall, we believe that evaluation should be comparative before the process starts, or before subscribing to a service; non-comparative during the process; and, after the trial period, periodically comparative and/or continuous.

Whether an assessment should be comparative or not depends on the intended use of the evaluation. Three uses can be considered:

  • Formative: refers to the information provided to ensure the quality of a service or to improve it. We must pay close attention to the nature and needs of consumers and to the relationship between process and results. Appraisers should be in close touch with program staff, and the evaluation plan should be flexible and responsive.
  • Summative: a retrospective analysis of completed projects. The information is directed not at the staff responsible for development but at sponsors and consumers. It should include an assessment of the effectiveness of the investment.
  • Exemplification: evaluation and research are different activities. The former considers all the criteria used in determining a value, while the latter is confined to the study of certain variables of theoretical interest. Evaluations often use subjective methods and are not as controlled or as subject to manipulation as research studies.

3. TYPES OR MODELS OF EVALUATION

3.1. Stufflebeam Classification

Stufflebeam distinguishes three large groups in which to frame evaluations:

  • Pseudo-evaluations: studies that use evaluation to achieve other goals. They include covert investigations and communication (public-relations) studies.
  • Quasi-evaluations: studies that sometimes aim to judge the value or merit of an object and sometimes do not. They include objectives-based studies and experimental studies.
  • True evaluations: studies devoted to judging the value and merit of the object under study. They include decision-oriented studies, client-centered studies, political studies, and consumer-based studies.

Pseudo-evaluations

  • Covert investigations: the advance organizers are the threats the client faces. The client's intention in commissioning a covert study is to obtain, maintain, or increase a sphere of influence, power, or money. Typical methods include document analysis, surveillance of those involved, simulated studies, private investigations, and secret files. Since the information may be disclosed selectively to create a false image of an object's value, or withheld to hide a situation, this approach is called pseudo-evaluation.
  • Communication (public-relations) studies: studies planned, conducted, and used to satisfy public-relations purposes; here the advance organizers are the propagandist's information needs. The intention of the study is to help the client create a positive image of an institution, program, system, or the like. Typical methods in these studies are inspections, tests, and the use of expert advisors.

Quasi-evaluations

These studies begin with a specific problem and then seek the appropriate methodology to solve this problem.

  • Studies based on objectives: the advance organizers are usually statements of objectives, which may be set by the client, by the people whose work is being evaluated, or by the assessor. The purpose is to determine whether the objectives have been achieved and thus whether the work being evaluated has succeeded. The most common audiences are program developers, sponsors, administrators, and service providers. Typical methods include collecting and analyzing performance data. Its pioneer was Ralph Tyler.
  • Studies based on experimentation: included among quasi-evaluations because their questions and methodologies are sometimes, but not always, related to judgments of value and merit.
    Experimental research recalls Kaplan's warning against the "law of the instrument", under which the method comes to define the study. Advance organizers are problem statements, hypotheses, and study questions. The most common purpose of an experiment is to determine or establish causal links between certain dependent variables and independent variables (teaching method, test scores, …). Typical methods are experimental and quasi-experimental designs. The pioneers are Lindquist, Campbell and Stanley, Suchman, and Cronbach.
    Advantage: They provide robust methods for determining relatively unequivocal causal relationships between the program and its results.
    Problems: The method is often not feasible to apply in field settings, and it provides far more restricted information than is needed to evaluate educational, health, and welfare programs. Experiments also tend to provide information only at the end, which is not useful for guiding the development process.

Other types of study include program verification, information systems, and administrative and accountability studies.

True Evaluations

For Stufflebeam, these are studies that avoid biased positions and set out to conduct investigations based on questions of value and merit.

  • Decision-oriented studies stress that evaluation should be used both to refine a program and to judge its value. The decisions to be taken provide the advance organizers for these studies. Their intention is to provide knowledge and evaluative bases for making and justifying decisions. Appropriate methods include inspections, needs assessments, case studies, sets of recommendations, observation, and structured and quasi-experimental designs. The pioneering work was done by Cronbach.
    Advantage: They encourage professional groups and government organizations to use continuous, systematic evaluation to plan and carry out services that meet customer needs.
    Limitation: The essential partnership between the evaluator and the decision-maker can lead to distortion of the results. External meta-evaluation is used to offset these deviations.
  • Client-centered studies focus mainly on helping those who perform a service daily to assess and improve their contributions. The advance organizers are the concerns and problems of those in the service program. Their purpose is to help the people who are part of a service understand how the service functions, the needs it addresses, and the degree to which it is respected and valued. Typical methods include case studies, adversary reports, sociodramas, and what Stake called responsive evaluation. The pioneering work was done by Stake.
    Virtue: It is an action-oriented approach that helps the people involved in programs to conduct their own evaluations.
    Drawback: A lack of external credibility, and the possibility of manipulation by certain local actors who, in effect, exert great control over the evaluation.
  • Political studies serve to identify and assess the merits of the various policies competing within a society. The advance organizer is a particular political issue. Their purpose is to describe and assess the potential costs and benefits of the various policies applicable to a given institution or society. The methods used include the Delphi technique, quasi-experimental designs, scenarios, forecasts, and judicial procedures. The pioneering work was done by Joseph Rice.
    Advantage: They are essential for guiding institutions and society.
    Disadvantage: They are easily corrupted by the political environment in which they develop.
  • Consumer-based studies: the appraiser acts as an informed surrogate for the consumer. The advance organizers are the values and needs of society. Their purpose is to judge the relative value of alternative goods and services and thus help taxpayers and clients choose among them. Methods include checklists, needs assessments, evaluation of objectives, experimental and quasi-experimental designs, modus operandi analysis, and cost analysis. The pioneering work was done by Scriven.
    Advantages: It is a solid, independent assessment designed to protect consumers from poor-quality products and services.
    Disadvantage: It can become so independent of practitioners that it fails to help them serve consumers better. It is also very expensive.

3.2. De Ketele's Classification

De Ketele distinguishes three evaluation functions. For each, he specifies the decisions taken regarding the action, the decisions taken regarding the trainees' learning, and the main processes involved.

Orientation (decision-guidance) evaluation:

  • Regarding the action. General decision: moving to a new phase of action. Specific decisions:
    1. Determine the objectives (context evaluation).
    2. Determine the chances of success of the action (predictive evaluation).
    3. Establish strategies and methods (evaluation of "inputs").
    4. Anticipate reality (preventive evaluation).
  • Regarding the trainees. General decision: starting a new learning process. Specific decisions:
    1. Determine the direction that suits the person.
    2. Determine the person's chances of success and assess their potential in a specific branch of training (predictive evaluation).
    3. Determine the characteristics and needs of the person (evaluation of "inputs").
    4. Anticipate reality (preventive evaluation).
  • Main processes: needs analysis; diagnosis of the strengths and weaknesses of the system or the person (system audit, diagnostic evaluation).

Regulation evaluation:

  • Regarding the action. General decision: regulatory measures taken during the process with a view to making the action more effective. Specific decisions:
    1. Change the strategies.
    2. Modify the human resources.
    3. Modify the material resources.
  • Regarding the trainees. General decision: taking measures to facilitate learning throughout the process. Specific decisions:
    1. Take measures regarding the individual learner (formative evaluation).
    2. Modify learning strategies at the group level (formative evaluation).
  • Main processes: diagnosis of the strengths and weaknesses of the interim product, the procedures, and the process (system audit, diagnostic assessment of learning).

Certification evaluation:

  • Regarding the action. General decision: certify the success or failure of the action, depending on the achievement of its objectives. Specific decisions:
    1. Repeat the action.
    2. Redirect the action.
    3. Accept it.
  • Regarding the trainees. General decision: certify the success or failure of the person based on a required performance level. Specific decisions:
    1. Select or admit (selection evaluation).
    2. Issue the certificates needed for life in society (certification evaluation).
    3. Classify people (classification evaluation).
  • Main processes: short-, medium-, and long-term assessment; final balance (product evaluation, referring to a descriptive criterion or, conversely, summative).

4. GOAL OF EVALUATION

Purpose: for Stufflebeam, usefulness is the core of evaluation; to be useful, the evaluation must be supported by an entire scientific process.

Objective: To increase effectiveness in conducting a program.

5. THE EVALUATIVE RESEARCH PROCESS

It is a systematic, intentional process directed at determining the usefulness of an object in order to improve it. It demands rigor throughout.

5.1. The Process According to Different Authors

Following De la Orden, the differences between evaluative research and research in the strict sense are:

  • Value judgments are made explicit not only in the selection and definition of the problem, but also in the development and implementation of study procedures.
  • It is difficult, and sometimes inappropriate, to formulate precise hypotheses.
  • Replication is impossible.
  • Data to be collected are determined by the viability of the process.
  • Many relevant variables can be verified only superficially.
  • The decision criterion is the responsibility of the agency administering the program.
  • The evaluation report should be tailored to the requirements of the decision-maker.

Latorre et al. propose the following phases for an evaluative study:

  1. The need, problem, complaint, hearing, expectations
  2. Purpose
  3. Variables, situations, areas, aspects
  4. Model design, strategy, procedure
  5. Data Collection
  6. Data Analysis
  7. Valuation process
    • Positive and/or negative judgments

5.2. Different Stages

De Ketele considers the following stages valid for all types of evaluation:

Step 1. To articulate the evaluation objectives.

The first step in formulating the objectives is to conduct a needs analysis and translate that analysis into objectives, which may concern training or action.

Step 2. Determine the sub-objectives or criteria. For criteria concerning people's performance, we can establish criteria of assessment and correction, of weighting of success, of resolution, and of behavior selection.

Step 3. The determination of the information to be collected should refer to the objective and the criteria considered as signs of achievement.

Step 4. Determine a strategy for collecting the information. Once the information to be collected has been determined, we can either:

  • Seek adequate natural situations where this information can be collected, or
  • Set up or create a situation that allows the information to be collected.

Step 5. Collect the information reliably. It comprises two phases:

  • Obtaining the information through the chosen instruments.
  • Tabulating the information so that the analysis can be carried out.

Step 6. Confront the information collected with the established criteria. It is important that this confrontation be carried out by a single person; if several take part, they must reach agreement.
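When several raters do take part, their level of agreement can be checked numerically before the results are accepted. A minimal sketch (the ratings below are invented for illustration) computing simple percent agreement between two raters over the same set of items:

```python
def percent_agreement(rater_a, rater_b):
    """Share of items on which two raters assigned the same category."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must judge the same items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical ratings: each element is the category assigned to one item.
a = ["pass", "pass", "fail", "pass", "fail"]
b = ["pass", "fail", "fail", "pass", "fail"]
print(percent_agreement(a, b))  # 4 of 5 items match, so 0.8
```

Percent agreement is the simplest index; chance-corrected measures such as Cohen's kappa are used when stronger evidence of reliability is needed.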

Step 7. Formulation of the conclusions in a clear and precise way.

The report can be oral or written. Validity and reliability checks at each stage underlie any such process.

6. ASSESSMENT METHODS

One aspect that differentiates systematic evaluation from informal evaluation is its methodology.

We recommend an eclectic approach: evaluators should master a wide range of appropriate techniques and know how to apply them in evaluative contexts. They should be able to judge which techniques are potentially applicable and which serve the particular purposes of the evaluation most effectively.

Technical areas: interview, preliminary reports, content analysis, observation, policy analysis, economic analysis, technical reports, case studies, listening to opposing views, checklists, test development, statistical analysis, theorizing, and project management.

7. REPORT

This is the document that has to be given to people, organizations, etc., who have commissioned the evaluation. It should be written clearly.

Points which must contain:

  • Brief description of the object to evaluate.
  • The points subjected to study and assessment, and the reasons for choosing them.
  • Model or design used to assess these points.
  • Techniques or methods used to collect information, with justification.
  • Presentation of the results in a way that is concise and as complete as possible.
  • Valuation of the whole process, with positive and negative judgments: rating the information collected and reporting on possible courses of action, their possibilities, advantages, and disadvantages.
  • Possible decisions to make.

8. DECISION MAKING

This is the final stage. Decision making should take the form of a serious improvement plan, if that was the goal, or of an acceptance or rejection of the object's value.