Research Methods in Psychology: Key Concepts
Correlation vs. Causation in Research
In a correlational study, Karpinski found that students who used Facebook had lower GPAs than those who didn't (3.0-3.5 vs. 3.5-4.0) and reported less time studying (1-5 hours vs. 11-15 hours). However, correlation does not equal causation. People who prefer more leisure time may simply be more drawn to distractions like Facebook. Alternatively, students who use Facebook might also spend more time on other non-studying activities, such as sports or music. In addition, participants did not report an exact GPA; the study likely asked only, "Which range does your GPA fall into?" Finally, roughly 80% of people had Facebook at the time of the study, and having an account does not tell us whether or how much each person actually used it.
Bias in Research
Confirmation bias is our tendency to cherry-pick information that confirms our existing beliefs or ideas; researchers may see the results they wish to see. It is related to illusory correlation: perceiving a relationship where none exists, or perceiving a stronger relationship than actually exists (e.g., the belief that couples are more likely to get pregnant after adopting, or that it always rains on the weekend).
Basic vs. Applied Research
Basic research answers fundamental questions about the nature of behavior (e.g., "Is extraversion related to sensation-seeking?") and evaluates theories. Applied research addresses practical problems and potential solutions (e.g., program evaluation, which assesses social reforms and innovations in institutions).
Theory and Hypothesis
A theory is a partially verified statement of a scientific relationship that cannot be directly observed. A hypothesis is a tentative statement about the expected relationship between variables. One theory generates multiple hypotheses; we test hypotheses to verify a part of a theory.
Importance of Literature Review
Why should you review the literature before running a study?
- Gain ideas about hypotheses, variables, design, materials, and procedures.
- Keep up-to-date on empirical and theoretical issues.
- Avoid needless duplication of effort.
The Belmont Report
The Belmont Report outlines three ethical principles:
- Respect for persons (autonomy): Participants should be able to decide whether to participate.
- Beneficence: Research should maximize benefits and minimize risks.
- Justice: Fairness in distributing the risks and benefits of research.
Research Risks and Benefits
Risks include psychological harm, physical harm, and loss of confidentiality. Benefits include educational value, treatment for a psychological or medical problem, monetary payment, gifts, the satisfaction of contributing to research, and beneficial applications of the research findings.
Informed Consent
A consent form should include:
- The purpose of the research.
- Procedures that will be used, including time involved.
- Risks and benefits.
- Any compensation.
- Confidentiality.
- Assurance of voluntary participation and permission to withdraw.
- Contact information for questions.
When is Deception Acceptable?
Deception is permissible when:
- It is necessary to reduce bias in participants' responses.
- It has the potential to produce advances that benefit a large number of people.
- It is not unduly harmful to participants.
Debriefing
Debriefing occurs after the study and includes an explanation of the purposes of the research. It is an opportunity for the researcher to deal with issues of withholding information, deception, and potential harmful effects of participation.
Institutional Review Board (IRB)
The Institutional Review Board is responsible for reviewing research conducted within an institution before it begins, to ensure that the study is ethically sound and that risks to participants are minimized and justified by the potential benefits.
Ethical Issues in Classic Studies
Milgram’s Obedience Study
Examined the conditions under which participants would follow orders to deliver what they believed were painful electric shocks to another person. Ethical Issues (EI): Deception, stress, and pressure to continue; the debriefing may have been insufficient. However, most participants later said they were glad to have participated. Improvements: Ensure a thorough debriefing that explains the shocks were fake and allows participants to ask questions. In later variations, participants were more likely to disobey when the experimenter had less status, when someone else disobeyed first, when the experimenter was farther away, when the victim (learner) was closer, or when participants were made to feel personally responsible for the outcome.
Stanford Prison Experiment
Participants played the role of guard or prisoner. EI: Psychological harm, mistreatment, participants not allowed to leave. Improvements: Ensure participants are not mentally humiliated and are allowed to leave at any time.
Email to Professors Study
Examined whether professors responded differently to inquiries about PhD programs from students of underrepresented groups. EI: Professors were unaware they were in a study, their time was wasted, and deception was used. Justification: Deception was considered necessary to obtain natural, unbiased responses (maintaining validity). Improvements: Debrief participants and ensure anonymity.
Operational Definitions
Operationally define: Specify the set of procedures used to measure (or manipulate) a variable. In correlational research, the predictor variable plays the role of the independent variable (x), and the criterion variable plays the role of the dependent variable (y).
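As a rough sketch of how a predictor and a criterion appear in an analysis, the hypothetical Python example below regresses a criterion (exam score) on a predictor (hours studied); the variable names and numbers are invented for illustration, not taken from any study mentioned here.

```python
# Hypothetical illustration: predicting a criterion (y) from a predictor (x).
# The data are invented for demonstration purposes only.
from scipy import stats

hours_studied = [2, 4, 5, 7, 8, 10, 12]       # predictor variable (x)
exam_score = [55, 60, 62, 70, 74, 80, 85]     # criterion variable (y)

result = stats.linregress(hours_studied, exam_score)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}")
# A strong relationship lets us predict the criterion from the predictor,
# but without manipulation and random assignment it is not evidence of causation.
```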
Experimental vs. Nonexperimental Designs
Nonexperimental: Two or more variables are measured and the relationship between them is assessed; no variables are manipulated. Good for describing and predicting behavior, but it is difficult to establish causal relationships.
Experimental: An independent variable (IV) is manipulated (with at least two levels), and a dependent variable (DV) is measured. The goal is to show a causal relationship between the IV and DV.
Inferring Causality
Why shouldn’t we infer causality from correlational data?
- Directionality problem: Difficult to specify the direction of causation (e.g., violent video games and aggressive behavior).
- Third-variable problem: An unmeasured variable may cause changes in both observed variables (e.g., ice cream sales and drowning).
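A minimal simulation of the third-variable problem, using made-up numbers (in Python): temperature drives both ice cream sales and drownings, so the two outcomes correlate even though neither causes the other.

```python
# Third-variable (confounding) demonstration with simulated data.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(10, 35, size=500)          # the unmeasured third variable

# Both outcomes depend on temperature plus independent noise.
ice_cream_sales = 20 * temperature + rng.normal(0, 50, size=500)
drownings = 0.5 * temperature + rng.normal(0, 3, size=500)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation between sales and drownings: r = {r:.2f}")
# r comes out clearly positive even though neither variable influences the other;
# temperature (the third variable) produces the association.
```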
Elements to Infer Cause and Effect
- Temporal precedence: The causal variable should come first.
- Covariation: Demonstrated when participants in an experimental condition show the effect, while those in a control condition do not.
- Elimination of plausible alternative explanations: Achieved through random assignment and experimental control.
Internal and External Validity
Internal validity: The ability to draw conclusions about causal relationships. High internal validity means there are no plausible alternative explanations. Especially important in basic research. A confounding variable is one that varies along with the IV, making it unclear whether the IV or the confound is causing changes in the DV.
External validity: The degree to which results generalize beyond the sample and research setting. Important in applied research.
Laboratory vs. Field Settings
Laboratory setting: Artificial location, affording greatest control over extraneous variables. Good for internal validity.
Field setting: Real-world environment, where the behavior usually takes place. Usually good for external validity.
Mall Study Example
Examined differences in how shoppers of different weights were treated. Method: Shoppers asked store employees for a gift recommendation. Independent variables: weight (average or obese) and clothing (professional or casual). Dependent variables: measures of discrimination. Results: Interactions were shorter and more negative emotion words were used with obese shoppers, especially those in casual attire. Ethical issues: No informed consent from store employees; deception. Methodological issues: External validity is good because it is a field setting, but limited to female, white shoppers; internal validity is threatened by possible confounds (e.g., the researchers' behavior, a hot day, different stores).
Reliability and Validity of Measures
Reliability: The consistency or stability of a measure. Types: Test-retest reliability, internal consistency, interrater reliability. Minimum acceptable value: r = 0.7.
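As an illustration, test-retest reliability can be estimated as the Pearson correlation between two administrations of the same measure; the sketch below uses hypothetical scores and the conventional r = 0.7 cutoff mentioned above.

```python
# Test-retest reliability as a Pearson correlation (hypothetical scores).
import numpy as np

time1 = np.array([12, 18, 25, 30, 22, 15, 28, 20])  # first administration
time2 = np.array([14, 17, 27, 29, 20, 16, 30, 21])  # same people, retested later

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability: r = {r:.2f}")
print("acceptable" if r >= 0.7 else "below the conventional 0.7 cutoff")
```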
Validity: The extent to which a measure actually measures what it is intended to measure (how closely scores approach the truth).
- Face validity: The content appears to measure what it’s designed to measure.
- Content validity: The content adequately samples the larger universe of behaviors.
- Concurrent validity: Scores on the measure are related to a criterion measured at the same time.
- Predictive validity: Scores predict behavior on a criterion measured in the future.
- Convergent validity: Scores are related to other measures of the same construct.
- Discriminant validity: Scores are not related to measures that are theoretically different.
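As an illustration with simulated scores (not data from any study cited here), convergent and discriminant validity can be examined by correlating a new measure with an established measure of the same construct and with a measure of a theoretically unrelated construct.

```python
# Convergent vs. discriminant validity with simulated scores.
import numpy as np

rng = np.random.default_rng(1)
true_anxiety = rng.normal(0, 1, size=300)

new_anxiety_scale = true_anxiety + rng.normal(0, 0.5, size=300)    # new measure
established_anxiety = true_anxiety + rng.normal(0, 0.5, size=300)  # same construct
shoe_size = rng.normal(0, 1, size=300)                             # unrelated construct

convergent = np.corrcoef(new_anxiety_scale, established_anxiety)[0, 1]
discriminant = np.corrcoef(new_anxiety_scale, shoe_size)[0, 1]
print(f"convergent r = {convergent:.2f} (should be high)")
print(f"discriminant r = {discriminant:.2f} (should be near zero)")
```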
Reactive Measures
A measure is reactive if awareness of being measured changes an individual's behavior. Minimize reactivity by using unobtrusive measures or by waiting until subjects become accustomed to being observed. Example: A student's test performance declines when the teacher stands over the student and watches the whole time.