Ethical Principles and Research Methods in Psychology

Chapter 4: The Tuskegee Study

The Tuskegee Study (1932–1972) was an unethical U.S. Public Health Service study that observed the progression of untreated syphilis in 600 African American men without their informed consent. Participants were misled about their condition, falsely told they were receiving treatment for “bad blood,” and denied penicillin even after it became the standard treatment in 1947.

The study caused deaths, disease transmission to spouses and children, and deep mistrust of the medical community among African Americans. It was exposed in 1972, leading to public outrage, a settlement for survivors, and President Bill Clinton’s formal apology in 1997. The study prompted stricter research ethics guidelines, including the Belmont Report and the establishment of Institutional Review Boards (IRBs).

Milgram Obedience Studies (1960s)

Purpose: Explore obedience to authority, inspired by Holocaust atrocities.

  • Setup: Participants (“teachers”) were told to administer electric shocks to a “learner” (actor) for wrong answers, under the supervision of an authority figure.
  • Findings:
    • 65% delivered the maximum 450-volt shock.
    • Ordinary people followed authority despite moral conflict.
  • Ethical Issues:
    • Deception (fake shocks).
    • Emotional distress for participants.
    • Lack of full informed consent.
  • Legacy: Highlights authority’s influence on behavior; prompted stricter ethics in research (e.g., informed consent, debriefing).

Belmont Report (1979): Key Principles

  1. Respect for Persons
    • Autonomy: Participants must make informed decisions about participation (informed consent required).
    • Protection: Extra safeguards for vulnerable populations (e.g., children, prisoners).
  2. Beneficence
    • Do No Harm: Minimize risks to participants.
    • Maximize Benefits: Ensure research has meaningful outcomes.
    • Risk-Benefit Analysis: Weigh risks against potential benefits.
  3. Justice
    • Fair Participant Selection: Avoid exploiting or excluding groups unfairly.
    • Equitable Access to Benefits: Ensure research benefits are distributed fairly.

Application: Basis for IRBs, informed consent, and ethical research guidelines (e.g., Common Rule).

APA Ethical Principles (Additional)

  1. Fidelity and Responsibility
    • Build trust and uphold professional duties.
    • Avoid conflicts of interest and promote ethical behavior.
  2. Integrity
    • Be honest, accurate, and transparent.
    • Justify deception and correct errors quickly.

Institutional Review Board (IRB)

  • Role: Reviews research with human participants before it begins to ensure it complies with established ethical principles (e.g., the Belmont Report).
  • Composition (5+ members):
    • Scientist
    • Non-scientist (e.g., someone from another academic field)
    • Community member (no ties to the institution)
    • Prisoner advocate (if the study involves prisoners).

Informed Consent: Consent may not be required in cases of…

  • Naturalistic observation in low-risk public settings
  • Self-report of non-intrusive questions

Types of Deception:

  • Omission: Withholding details.
  • Commission: Lying to participants.

Debriefing: After the study, researchers explain any deception and why it was necessary, helping to restore participants’ trust.

Data fabrication: Researcher manipulates data to fit a hypothesis

  • Altering, deleting, creating, etc.

Plagiarism: Representing another’s ideas as one’s own

  • Cite your sources!

Self-plagiarism: “Potentially unethical” practice of reusing one’s writings verbatim

Animal Research: The 3 R’s

  1. Replacement: Use alternatives to animals when possible (e.g., computer models).
  2. Refinement: Modify procedures to minimize animal stress or harm.
  3. Reduction: Use the fewest animals needed (e.g., efficient study designs).

Chapter 8: Correlations and Statistical Analyses

Pearson’s Correlation (r)

  • Measures the relationship between two continuous variables.
  • Range: -1 to 1.
    • Magnitude: Strength of the relationship (closer to 1 or -1 = stronger).
    • Sign: Direction of the relationship (+ = positive, – = negative).
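A minimal sketch of computing r by hand, using only the Python standard library (the study-hours and exam-score data are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the product of their spreads."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]
r = pearson_r(hours, scores)  # strong positive relationship (r close to +1)
```

The sign of `r` comes from the covariance term; the magnitude comes from how tightly the points cluster around a straight line.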

Other Correlation Types

Spearman’s Rank Correlation:

  • Measures the relationship between two ranked (ordinal) variables.

Polychoric Correlation:

  • Estimates the relationship between two categorical variables with ordered levels.

(Point-)Biserial Correlation:

  • Measures the relationship between one continuous variable and one binary (dichotomous) variable.

Variable Types

  1. Continuous Variable
    • Numerical values along a continuum (e.g., height, age, Likert ratings).
  2. Categorical (Nominal) Variable
    • Non-numerical categories (e.g., class standing, ethnicity, political affiliation).

T-Test for Categorical and Continuous Variables

  • Use a t-test when comparing:
    • Categorical variable (e.g., meeting location: online vs. in-person).
    • Continuous variable (e.g., marital satisfaction score).
  • A t-test checks if the groups (online vs. in-person) have different averages for the continuous variable (marital satisfaction).
  • The t-test results can also be converted into a Pearson’s correlation (r) to show the relationship strength.
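A minimal sketch of an independent-samples t-test and the t-to-r conversion, r = √(t² / (t² + df)), using invented satisfaction scores:

```python
import math

def two_sample_t(group1, group2):
    """Independent-samples t-test with pooled variance; returns (t, df)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical marital satisfaction scores (1-10) by meeting location
online = [6, 7, 5, 8, 6]
in_person = [4, 5, 3, 5, 4]

t, df = two_sample_t(online, in_person)
r = math.sqrt(t ** 2 / (t ** 2 + df))  # convert t to an effect-size r
```

The conversion lets group-difference results be reported on the same -1 to 1 scale as correlations.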

Correlational Studies

  • What: Measure, don’t manipulate, variables.
  • Result: Can make association claims (e.g., “X is related to Y”).
  • Limit: Cannot confidently make causal claims.
  • Tip: A correlational study is about the study design, not just the correlation coefficient.

Linear Association

  • Scatterplot Line: Straight
  • Direction: As X increases or decreases, Y consistently increases or decreases.
  • Assumption: Most psychological research methods assume linear relationships.

Curvilinear Association

  • Scatterplot Line: Curved
  • Direction: As X increases or decreases, Y changes in different ways depending on X’s value (or another variable).
  • Assumption: Not assumed, but can be analyzed using quadratic terms (X²) or interaction effects.
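Because Pearson's r assumes a linear trend, it can completely miss a curvilinear association. A small sketch with made-up U-shaped data, where the linear correlation comes out to exactly zero:

```python
import math

# U-shaped (curvilinear) data: y rises as x moves away from 0 in either direction
x = [-2, -1, 0, 1, 2]
y = [4, 1, 0, 1, 4]  # y = x**2 exactly

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
# r is 0 even though x and y are perfectly related via y = x**2
```

Adding a quadratic term (x²) as a predictor in a regression would capture this relationship.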

Statistical Significance (Frequentist)

  • Test: Checks if results are unlikely under the null hypothesis (no effect).
  • Significant: If p < .05, the result would be unlikely if the null hypothesis were true, so it is called statistically significant.
  • Key Points:
    • Larger samples = more certainty.
    • If there’s an effect, larger samples make significance more likely.

Precision: Confidence Interval (CI)

  • What: Range where the true value likely falls (e.g., 95% CI).
  • Key Points:
    • If 95% CI includes 0, the result is NOT significant (p ≥ .05).
    • Larger samples = narrower CI = more certainty.
  • Tip: CI is linked to p-values (both test significance).

Precision: How to Read CIs (Confidence Intervals)

  • APA Style:
    {stat} = ##, 95% CI [{lower ##}, {upper ##}]
  • Example:
    r = -.57, 95% CI [-.77, -.37]
    • This means the true value likely falls between -0.77 and -0.37.
    • If CI doesn’t include 0, the result is significant.
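One common way to compute a CI for r is the Fisher z-transformation. A sketch with an assumed sample size of n = 100 (the bounds depend on n, so they will not exactly match the example above):

```python
import math

r, n = -0.57, 100          # sample correlation; n = 100 is an assumption
z = math.atanh(r)          # Fisher z-transform of r
se = 1 / math.sqrt(n - 3)  # standard error of z
lo = math.tanh(z - 1.96 * se)  # back-transform bounds to the r scale
hi = math.tanh(z + 1.96 * se)
# The 95% CI excludes 0, so the result is significant at p < .05
```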

Replication

  • What: Repeating studies to confirm results.
  • Why: Ensures findings are reliable and consistent.
  • Benefits:
    • Multiple studies = better estimate of true effect.
    • Used in meta-analyses (combine results for stronger conclusions).

“Statistical significance” is NOT the same as “practical significance”

Outliers

  • What: Extreme values that differ from the rest of the data.
  • Impact: Can bias results, especially in small samples.
  • Note: Check for outliers to ensure accurate relationships.

Outlier Treatment

  • Check: Is it an error (e.g., typo, mistake)?
    • If error: Remove the outlier.
  • Analyze: Compare results with and without the outlier to see its impact.
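The "analyze with and without" step can be as simple as this sketch (ratings invented; 50 is impossible on the scale, so it is treated as a data-entry error):

```python
# Hypothetical mood ratings on a 1-10 scale; 50 is a suspected typo
data = [5, 6, 5, 7, 6, 50]

mean_all = sum(data) / len(data)
cleaned = [x for x in data if x <= 10]  # drop values outside the valid range
mean_cleaned = sum(cleaned) / len(cleaned)
# One outlier more than doubles the mean in this small sample
```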

Spurious Associations

  • What: False associations caused by differences in subgroup averages.
  • Example: A link between two variables exists only because of subgroup differences, not a real relationship.
  • Fix: Control for subgroup effects to check if the association holds.

Restriction of Range

  • What: Part of a variable’s range is missing in the data.
  • Impact: Makes the correlation appear weaker (lowered).
  • Example: Studying GPA but only including students with GPAs above 3.0.
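A sketch of the attenuation with invented data: the full-range correlation is strong, but restricting x to its upper half (like sampling only GPAs above 3.0) weakens it:

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Hypothetical full-range data (y tracks x with a little noise)
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]

r_full = pearson_r(x, y)

# Keep only cases with x >= 5 -- the restricted range
pairs = [(a, b) for a, b in zip(x, y) if a >= 5]
r_restricted = pearson_r([a for a, _ in pairs], [b for _, b in pairs])
# r_restricted is noticeably weaker than r_full
```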

Association Claims – External Validity

Moderator: A variable that changes the relationship between two other variables

Interaction Effects

  • What: The relationship between one predictor and an outcome changes based on another predictor.
  • Key: Shows how variables work together, not just individually.
  • Example: Exercise improves mood more for people with good sleep compared to poor sleep.

Association Claims & Internal Validity

  • Causal Criteria:
    1. Covariance: Cause and effect occur together.
    2. Temporal Precedence: Cause happens before effect.
    3. No Confounds: No other factor explains the relationship.

Association Claims: Construct Validity

  • Key Question: Are the measures valid and reliable?
  • Why It Matters: Without valid data, any associations are meaningless.

Chapter 9: Multivariate Designs

Experiments & Causal Claims

  • Covariance: Cause and effect are tested together.
  • Temporal Precedence: Cause happens before the effect.
  • Internal Validity: Controlled design eliminates confounds.

Why Not Do an Experiment?

  • Impossible: Can’t manipulate some variables (e.g., personality traits).
  • Unethical: Can’t assign harmful conditions (e.g., harmful praise for kids).

Multiple Regression Variables

  • Correlational Studies:
    • Predictor (X) → Outcome/Criterion (Y)
  • Experimental Studies:
    • Independent Variable (X) → Dependent Variable (Y)

Third-Variable Problem

  • What: A missing variable explains the relationship between two others.
  • Result: Creates a spurious association (false relationship).

Multiple Regression

  • What: Measures the relationship between a predictor and outcome while controlling for other variables.
  • “Control for”: Means keeping other variables constant.
  • Covariates: The variables being controlled for.

Formula:
Y = a + b1X1 + b2X2
(Outcome = Intercept + Slope 1 × Predictor 1 + Slope 2 × Predictor 2)

Simple vs. Multiple Regression

  • Simple Regression:
    • Examines one predictor and its effect on an outcome.
    • Example: How study time (predictor) affects test scores (outcome).
    • Formula: Y = a + bX.
  • Multiple Regression:
    • Examines multiple predictors and their effects on an outcome while controlling for others.
    • Example: How study time and sleep (predictors) together affect test scores (outcome).
    • Formula: Y = a + b1X1 + b2X2.
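A sketch of fitting Y = a + b1X1 + b2X2 by hand using the two-predictor normal equations (the data are invented so that the true coefficients are a = 1, b1 = 2, b2 = 3):

```python
# Hypothetical data generated from y = 1 + 2*x1 + 3*x2 (no noise)
x1 = [1, 2, 3, 4]
x2 = [2, 1, 4, 3]
y  = [9, 8, 19, 18]

n = len(y)
m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n

# Centered sums of squares and cross-products
s11 = sum((a - m1) ** 2 for a in x1)
s22 = sum((a - m2) ** 2 for a in x2)
s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))

denom = s11 * s22 - s12 ** 2
b1 = (s1y * s22 - s2y * s12) / denom  # slope for x1, controlling for x2
b2 = (s2y * s11 - s1y * s12) / denom  # slope for x2, controlling for x1
a = my - b1 * m1 - b2 * m2            # intercept
```

Each slope is computed with the other predictor's overlap (s12) subtracted out, which is exactly what "controlling for" means here.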

Multiple Regression & Moderators

  • Controlled Variables: Held constant to isolate the main predictor’s effect.
    • Not Moderators: Unlike controlled variables, moderators change the relationship between two variables depending on the moderator’s value.

Key Difference: Controlled variables don’t influence the strength/direction of relationships, but moderators do!

Regression Coefficients

  • What: Show the unique impact of a predictor on the outcome.
  • How: Shared variance with other predictors is removed (factored out).
  • Same as: The concept behind partial correlations.

Third Variable Example

  • Scenario: Age influences both TV content exposure and pregnancy risk.
  • Why It’s a Third Variable:
    • Older age → More adult TV content.
    • Older age → Higher likelihood of pregnancy.
  • Result: Age explains the observed correlation, making it a spurious association.

Beta Coefficient (β)

  • What: Standardized regression coefficient.
  • Similar to: A correlation coefficient (r), but used in regression to show the strength and direction of the relationship between a predictor and the outcome.

Unstandardized vs. Standardized Coefficients

  • Unstandardized (b):
    • For each 1-unit increase in X, Y changes by b units, holding other variables constant.
  • Standardized (β):
    • For each 1 standard deviation increase in X, Y changes by β standard deviations, holding other variables constant.
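The two coefficients are linked by β = b × (SDx / SDy), and with a single predictor β equals Pearson's r. A sketch with invented data verifying both facts:

```python
import math

# Hypothetical data
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

b = sxy / sxx                    # unstandardized slope (y-units per x-unit)
sd_x = math.sqrt(sxx / (n - 1))
sd_y = math.sqrt(syy / (n - 1))
beta = b * sd_x / sd_y           # standardized slope (SDs of y per SD of x)
r = sxy / math.sqrt(sxx * syy)   # Pearson's r
# With one predictor, beta and r are identical
```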

Problems with Too Many Predictors

  1. Collinearity:
    • Predictors overlap too much, making it hard to identify which truly explains the variance.
  2. Overfitting:
    • Model fits the current sample too well, reducing generalizability to other samples.

Limitations for Internal Validity

  • Can:
    • Establish covariance.
    • Rule out confounds (if measured and included).
  • Cannot:
    • Establish temporal precedence (cause must happen before effect).

Mediation vs. Moderation

  • Mediation:
    • Mnemonic: “Mediation → Middle → Explains why.”
    • Definition: A mediator sits in the middle, explaining why one variable affects another.
    • Example: Exercise improves sleep, which improves mood (sleep = mediator).
  • Moderation:
    • Mnemonic: “Moderation → Modification → For whom or what type.”
    • Definition: A moderator changes the strength or direction of a relationship based on its value.
    • Example: Exercise improves mood more for people with good sleep habits (sleep = moderator).

Longitudinal Designs

  • What: Measure the same variables in the same people multiple times.
  • Why: Helps establish temporal precedence (cause before effect).
  • Three Key Associations:
    1. Cross-sectional: Correlation between variables at the same time.
    2. Autocorrelations: Correlation of a variable with itself over time.
    3. Cross-lag: Tests whether one variable predicts another over time.
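The three associations can be sketched with made-up two-wave data (variable names echo the overvaluation/narcissism example below, but every number is invented):

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Hypothetical two-wave scores for the same four people
overval_t1 = [1, 2, 3, 4]
narc_t1    = [1, 1, 2, 2]
narc_t2    = [1, 2, 3, 4]

cross_sectional = pearson_r(overval_t1, narc_t1)  # same wave, two variables
autocorr        = pearson_r(narc_t1, narc_t2)     # same variable across waves
cross_lag       = pearson_r(overval_t1, narc_t2)  # earlier X with later Y
```

The cross-lag correlation is the one that speaks to temporal precedence: it asks whether earlier overvaluation predicts later narcissism.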

Brummelman et al. (2015)

  • Purpose: Study link between parental overvaluation and child narcissism.
  • Participants: 565 children and parents (Netherlands).
  • Method: Longitudinal design; data every 6 months over 24 months.
  • Measures:
    • Children: Self-reported narcissism.
    • Parents: Self-reported overvaluation.

Cross-Sectional Correlations

  • What: Correlation between two variables measured at the same time.
  • Can They Be Done Without Longitudinal Design? Yes, cross-sectional correlations can be obtained in a single-time-point study.

Autocorrelations

  • What: Correlation of a variable with itself over time.
  • Mnemonic: “Auto” = “Self.”
  • Example: Comparing a person’s happiness scores at different time points.

Cross-Lag Effects

  • What: Correlation between an earlier measure of one variable and a later measure of another variable.
  • Why: Helps assess temporal precedence (which variable predicts the other over time).

Limitations of Multiple Regression (Internal Validity)

  • Can:
    • Establish covariance.
    • Inform temporal precedence (if measures consider timing, e.g., “past 6 months”).
  • Cannot:
    • Fully rule out all confounding variables.

Chapter 14: Replication, Generalization, and the “Real World”

Types of Replication

  1. Direct Replication:
    • Repeats the original study exactly to confirm results.
    • Purpose: Test reliability of findings.
  2. Conceptual Replication:
    • Tests the same research question but uses different methods or measures.
    • Purpose: See if findings generalize across variations.
  3. Replication-and-Extension:
    • Repeats the original study but adds new elements (e.g., extra variables or conditions).
    • Purpose: Confirm findings while exploring new questions.
  4. Meta-Analyses:
    • Combines results from many studies on the same topic.
    • Purpose: Provide a big-picture view of evidence and estimate the true effect size.

Open Science Collaboration (2015)

  • What: Replicated 100 studies from major psychology journals using 100 labs worldwide.
  • Goal: Assess the replicability of psychological research.
  • Result: Only 39% of studies successfully replicated the original findings, highlighting issues with reproducibility in psychology.

Why Didn’t the Studies Replicate?

  1. Contextually Sensitive Effects:
    • Small changes in context can affect results.
    • Examples:
      • Charitable appeal: Original study used letters about AIDS, but replication used emails about environmental causes.
      • Diversity study: Original with Stanford students, replication with Dutch students, where university systems differ.
  2. Not Enough Replications:
    • Single replications can miss effects due to variability.
    • Solution: Projects like Many Labs Project (MLP) conduct multiple replications (up to 36 per study).
    • Result: Replication success rate increased to 85% when combining results across replications.

Meta-Analysis

  • What: Combines results from many studies to summarize overall findings.
  • How: Treats individual studies as data points in the analysis.
  • Challenge:
    • File Drawer Problem: Null or opposite results often remain unpublished.
    • Solution: Contact researchers for unpublished work to reduce bias.
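As a rough sketch, a meta-analytic estimate can be a sample-size-weighted average of the studies' effect sizes (real meta-analyses typically use inverse-variance weights on Fisher-z-transformed correlations; the study values here are invented):

```python
# Hypothetical studies: (observed r, sample size)
studies = [(0.30, 50), (0.20, 200), (0.45, 30)]

total_n = sum(n for _, n in studies)
weighted_r = sum(r * n for r, n in studies) / total_n
# The large n = 200 study pulls the estimate toward its smaller effect
```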

Transparency in Research

  • Problem: “Publish or Perish” culture prioritizes publication counts over quality.
    • Pressure to publish for jobs, tenure, and grants.
  • Result: Questionable Research Practices (QRPs), such as p-hacking and HARKing.

p-hacking: Manipulating data or analyses to force a statistically significant p-value.

  • Examples:
    • Adding participants after the initial analysis.
    • Removing outliers to change results.
    • Trying multiple analyses until one comes out significant.
  • Impact: Undermines the validity of research findings.

HARKing: Hypothesizing After Results are Known — creating a hypothesis after seeing the results and presenting it as preplanned.

  • Why It’s a Problem:
    • Misrepresents the scientific process.
    • Makes findings appear more predictable and reliable than they are.
    • Undermines transparency and trust in research.

Open Science Framework (OSF)

What: Open-source platform for transparent research.

Features:

  • Pre-register hypotheses and expected outcomes.
  • Archive study materials, data, and analysis code.

Benefit: Increases credibility (and you can earn badges for transparency!).