Introduction to Mediation Analysis

This post introduces the basics of mediation analysis without going into statistical details. For details, please refer to the articles listed at the end of this post.

What is mediation?

Let’s say previous studies have suggested that higher grades predict higher happiness: X (grades) → Y (happiness). (This research example is made up for illustration purposes. Please don’t consider it a scientific statement.)

[Figure: X (grades) → Y (happiness)]

I think, however, grades are not the real reason that happiness increases. I hypothesize that good grades boost one’s self-esteem and then high self-esteem boosts one’s happiness: X (grades) → M (self-esteem) → Y (happiness).

[Figure: X (grades) → M (self-esteem) → Y (happiness)]

This is a typical case of mediation analysis. Self-esteem is a mediator that explains the underlying mechanism of the relationship between grades (IV) and happiness (DV).

How to analyze mediation effects?

Before we start, please keep in mind that, like any other regression analysis, mediation analysis does not imply causal relationships unless it is based on an experimental design.

To analyze mediation:
1. Follow Baron & Kenny’s steps
2. Use either the Sobel test or bootstrapping for significance testing.

The following shows the basic steps for mediation analysis suggested by Baron & Kenny (1986). A mediation analysis consists of three sets of regression: X → Y, X → M, and X + M → Y. This post will show examples using R, but you can use any statistical software. They are just three regression analyses!

# Download data online. This is a simulated dataset for this post.
myData <- read.csv('http://static.lib.virginia.edu/statlab/materials/data/mediationData.csv')

Step 1.
[Figure: Step 1, X → Y]

$$Y = b_{0} + b_{1}X + e$$

Is \(b_{1}\) significant? We want X to affect Y. If there is no relationship between X and Y, there is nothing to mediate.

Although this is what Baron and Kenny originally suggested, this step is controversial. Even if we don’t find a significant association between X and Y, we could move forward to the next step if we have a good theoretical background about their relationship. See Shrout & Bolger (2002) for details.

model.0 <- lm(Y ~ X, myData)
summary(model.0)
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)    
# (Intercept)   2.8572     0.6932   4.122 7.88e-05 ***
# X             0.3961     0.1112   3.564 0.000567 ***

### b1 = 0.3961, p < .001  # significant!

Step 2.
[Figure: Step 2, X → M]

$$M = b_{0} + b_{2}X + e$$

Is \(b_{2}\) significant? We want X to affect M. If X and M have no relationship, M is just a third variable that may or may not be associated with Y. A mediation makes sense only if X affects M.

model.M <- lm(M ~ X, myData)
summary(model.M)
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)    
# (Intercept)  1.49952    0.58920   2.545   0.0125 *  
# X            0.56102    0.09448   5.938 4.39e-08 ***

### b2 = 0.5610, p < .001  # significant!

Step 3.
[Figure: Step 3, X + M → Y]

$$Y = b_{0} + b_{4}X + b_{3}M + e$$

Is \(b_{4}\) non-significant or smaller than before? We want M to affect Y, but X to no longer affect Y (or X to still affect Y but in a smaller magnitude). If a mediation effect exists, the effect of X on Y will disappear (or at least weaken) when M is included in the regression. The effect of X on Y goes through M.

model.Y <- lm(Y ~ X + M, myData)
summary(model.Y)
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)    
# (Intercept)   1.9043     0.6055   3.145   0.0022 ** 
# X             0.0396     0.1096   0.361   0.7187    
# M             0.6355     0.1005   6.321 7.92e-09 ***

### b4 = 0.0396, p = 0.719   # the effect of X on Y disappeared!
### b3 = 0.6355, p < 0.001

If the effect of X on Y completely disappears, M fully mediates between X and Y (full mediation). If the effect of X on Y still exists, but in a smaller magnitude, M partially mediates between X and Y (partial mediation). The example shows a full mediation, yet a full mediation rarely happens in practice.

Once we find these relationships, we want to see whether this mediation effect is statistically significant (i.e., different from zero). There are two main approaches: the Sobel test (Sobel, 1982) and bootstrapping (Preacher & Hayes, 2004). In R, you can use sobel() in the 'multilevel' package for the Sobel test and mediate() in the 'mediation' package for bootstrapping. Because bootstrapping has been strongly recommended in recent years (although the Sobel test was widely used before), I'll show only the bootstrapping method in this example.
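For reference, the Sobel test statistic can also be computed by hand from the Step 2 and Step 3 output, using the standard formula z = b2·b3 / sqrt(b3²·se2² + b2²·se3²). This sketch plugs in the estimates and standard errors printed above:

```r
# Sobel test by hand, using the coefficients and standard errors
# from Step 2 (X -> M) and Step 3 (M -> Y, controlling for X)
b2  <- 0.56102; se2 <- 0.09448   # Step 2: effect of X on M
b3  <- 0.63550; se3 <- 0.10050   # Step 3: effect of M on Y
z   <- (b2 * b3) / sqrt(b3^2 * se2^2 + b2^2 * se3^2)
p   <- 2 * (1 - pnorm(abs(z)))   # two-tailed p-value
z; p
# z is about 4.33, p < .001: the indirect effect differs from zero
```

The Sobel test assumes the indirect effect is normally distributed, which is often a poor approximation in small samples; that sampling-distribution problem is exactly why bootstrapping is preferred.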

mediate() takes two model objects as input (X → M and X + M → Y), and we need to specify which variable is the IV (treat) and which is the mediator (mediator). For bootstrapping, set boot = TRUE and sims to at least 500. After running it, look for ACME (Average Causal Mediation Effects) in the results and see whether it is different from zero. For details of mediate(), please refer to Tingley, Yamamoto, Hirose, Keele, & Imai (2014).

library(mediation)
results <- mediate(model.M, model.Y, treat='X', mediator='M',
                   boot=TRUE, sims=500)
summary(results)
#                Estimate 95% CI Lower 95% CI Upper p-value
# ACME             0.3565       0.2155       0.5291    0.00
# ADE              0.0396      -0.1761       0.2598    0.66
# Total Effect     0.3961       0.1563       0.5794    0.00
# Prop. Mediated   0.9000       0.5254       1.8820    0.00

### ACME = 0.3565, 95% CI [0.2155, 0.5291]  # significant!
### ACME stands for Average Causal Mediation Effects
### ADE stands for Average Direct Effects
### Total Effect is a sum of a mediation (indirect) effect and a direct effect

Note that the Total Effect in the summary (0.3961) is \(b_{1}\) in the first step: the total effect of X on Y (without M). The direct effect (ADE, 0.0396) is \(b_{4}\) in the third step: the direct effect of X on Y after taking into account the mediation (indirect) effect of M. Finally, the mediation effect (ACME) is the total effect minus the direct effect (\(b_{1} - b_{4}\), or 0.3961 - 0.0396 = 0.3565), which equals the product of the coefficient of X in the second step and the coefficient of M in the third step (\(b_{2} \times b_{3}\), or 0.56102 * 0.6355 = 0.3565). The goal of mediation analysis is to obtain this indirect effect and see whether it is statistically significant.
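This decomposition can be checked directly from the three fitted models. The quick sketch below reuses model.0, model.M, and model.Y from the steps above; in linear models the difference method (\(b_{1} - b_{4}\)) and the product method (\(b_{2} \times b_{3}\)) give identical answers:

```r
# Extract the four coefficients from the three regressions above
b1 <- coef(model.0)["X"]   # total effect of X on Y
b2 <- coef(model.M)["X"]   # effect of X on M
b3 <- coef(model.Y)["M"]   # effect of M on Y, controlling for X
b4 <- coef(model.Y)["X"]   # direct effect of X on Y

b1 - b4   # indirect effect: total minus direct
b2 * b3   # indirect effect: product of coefficients
# Both equal about 0.3565, matching the ACME from mediate()
```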

By the way, we don’t have to follow all three of the steps Baron and Kenny suggested. We could simply run two regressions (X → M and X + M → Y) and test the significance of the indirect effect using those two models. However, the suggested steps help you understand how mediation works!

model.M <- lm(M ~ X, myData)
model.Y <- lm(Y ~ X + M, myData)
results <- mediate(model.M, model.Y, treat='X', mediator='M',
                   boot=TRUE, sims=100)
summary(results)

Mediation analysis is not limited to linear regression; we can use logistic regression or polynomial regression and more. Also, we can add more variables and relationships, for example, moderated mediation or mediated moderation. However, if your model is very complex and cannot be expressed as a small set of regressions, you might want to consider structural equation modeling instead.

To sum up, here’s a flowchart for mediation analysis!

[Figure: mediation analysis flowchart]


For more information:

  • Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182.
  • Shrout, P. E., & Bolger, N. (2002). Mediation in experimental and nonexperimental studies: new procedures and recommendations. Psychological Methods, 7, 422-445.
  • Tingley, D., Yamamoto, T., Hirose, K., Keele, L., & Imai, K. (2014). mediation: R package for causal mediation analysis. Journal of Statistical Software, 59(5), 1-38.


Bommae Kim
Statistical Consulting Associate
University of Virginia Library
April 18, 2016 (published)
July 12, 2016 (typos in flowchart corrected)

For questions or clarifications regarding this article, contact the UVa Library StatLab: statlab@virginia.edu