Latest Posts

Visualizing the Effects of Proportional-Odds Logistic Regression

Proportional-odds logistic regression is often used to model an ordered categorical response. By “ordered”, we mean categories that have a natural ordering, such as “Disagree”, “Neutral”, “Agree”, or “Every day”, “Some days”, “Rarely”, “Never”. For a primer on proportional-odds logistic regression, see our post, Fitting and Interpreting a Proportional Odds Model. In this post we demonstrate how to visualize a proportional-odds model in R.
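
In R, such a response is typically stored as an ordered factor. Here is a small generic illustration (not part of the analysis below):

> lik <- c("Agree", "Neutral", "Disagree", "Agree")
> lik <- factor(lik, levels = c("Disagree", "Neutral", "Agree"), ordered = TRUE)
> lik  # prints with Levels: Disagree < Neutral < Agree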

To begin, we load the effects package. The effects package provides functions for visualizing regression models. This post is essentially a tutorial for using the effects package with proportional-odds models. We also load the car, MASS and splines packages for particular functions, which we’ll explain as we encounter them. If you don’t have the effects or car packages, uncomment the lines below and run them in R. MASS and splines are recommended packages that come with R.

> # install.packages("effects")
> # install.packages("car")

> library(effects)
> library(car)
> library(MASS)
> library(splines) 

To demonstrate how to visualize a proportional-odds model we’ll use data from the World Values Surveys (1995-1997) for Australia, Norway, Sweden, and the United States. This dataset, WVS, comes with the effects package. Once we load the effects package, the data is ready to access.

> head(WVS)
      poverty religion degree country age gender
1  Too Little      yes     no     USA  44   male
2 About Right      yes     no     USA  40 female
3  Too Little      yes     no     USA  36 female
4    Too Much      yes    yes     USA  25 female
5  Too Little      yes    yes     USA  39   male
6 About Right      yes     no     USA  80 female

The response variable of interest is poverty, which is a 3-level ordered categorical variable. It contains the answer to the question “Do you think that what the government is doing for people in poverty in this country is about the right amount, too much, or too little?” The answers form an ordered scale with levels “Too Little”, “About Right”, and “Too Much”. The other variables will serve as our predictors. These include country, gender, religion (belong to a religion?), degree (hold a university degree?), and age in years. The data contains 5381 records.

Before we get started, we should note that this example comes from an article on the effects package in the Journal of Statistical Software by John Fox and Jangman Hong, the authors of the effects package. You should definitely take the time to read through that article and cite it if you plan to use the effects package for your own research. What we seek to do in this blog post is elaborate on the example and provide some additional details.

Before we can visualize a proportional odds model we need to fit it. For this we use the polr function from the MASS package. The first model we fit models poverty as a function of country interacted with gender, religion, degree and age. The interaction allows the effects of the predictors to vary with each country.

> wvs.1 <- polr(poverty ~ country*(gender + religion + degree + age), data = WVS)
> summary(wvs.1)

Call:
polr(formula = poverty ~ country * (gender + religion + degree + 
    age), data = WVS)

Coefficients:
                               Value Std. Error  t value
countryNorway              0.5308176   0.286989  1.84961
countrySweden              0.5446552   0.546029  0.99748
countryUSA                -0.0347317   0.248059 -0.14001
gendermale                 0.0696120   0.090212  0.77165
religionyes                0.0094685   0.112476  0.08418
degreeyes                 -0.1242920   0.167603 -0.74158
age                        0.0155849   0.002597  6.00185
countryNorway:gendermale   0.1873611   0.144503  1.29659
countrySweden:gendermale   0.0563508   0.154414  0.36493
countryUSA:gendermale      0.2119735   0.139513  1.51938
countryNorway:religionyes -0.2186724   0.216256 -1.01118
countrySweden:religionyes -0.8789724   0.513263 -1.71252
countryUSA:religionyes     0.6002277   0.174433  3.44101
countryNorway:degreeyes    0.0558595   0.208202  0.26829
countrySweden:degreeyes    0.6281743   0.214295  2.93136
countryUSA:degreeyes       0.3030866   0.206394  1.46848
countryNorway:age         -0.0157142   0.004367 -3.59846
countrySweden:age         -0.0092122   0.004657 -1.97826
countryUSA:age             0.0005419   0.003975  0.13635

Intercepts:
                       Value   Std. Error t value
Too Little|About Right  0.7161  0.1535     4.6644
About Right|Too Much    2.5355  0.1578    16.0666

Residual Deviance: 10347.07 
AIC: 10389.07 

The summary output is imposing. In addition to 19 coefficients we have 2 intercepts. Larger coefficients with large t-values are indicative of important predictors, but with so many interactions it’s hard to see what’s happening or what the model “says”. To evaluate whether the interactions are significant, we use the Anova function from the car package. By default the Anova function returns Type II tests, which test each term after all others, excluding the term’s higher-order interactions. (The base R anova function performs Type I tests, which test each term sequentially.)

> Anova(wvs.1)
Analysis of Deviance Table (Type II tests)

Response: poverty
                 LR Chisq Df Pr(>Chisq)    
country           250.881  3  < 2.2e-16 ***
gender             10.749  1  0.0010435 ** 
religion            4.132  1  0.0420698 *  
degree              4.284  1  0.0384725 *  
age                49.950  1  1.577e-12 ***
country:gender      3.049  3  0.3841657    
country:religion   21.143  3  9.833e-05 ***
country:degree     12.861  3  0.0049476 ** 
country:age        17.529  3  0.0005501 ***

The Anova result shows all interactions except country:gender are significant. But what do the interactions mean? How do, say, country and age interact?

This is where the effects package enters. The effects package allows us to easily create effect displays. What are effect displays? The documentation for the effects package explains it this way:

“To create an effect display, predictors in a term are allowed to range over their combinations of values, while other predictors in the model are held to typical values.”

In other words, we take our model and use it to calculate predicted values for various combinations of certain “focal” predictors while holding other predictors at fixed values. Then we plot our predicted values versus the “focal” predictors to see how the response changes. Let’s demonstrate.

The two primary functions are Effect and plot. Effect generates the predictions and plot creates the display. Let’s say we’re interested in the age and country interaction. We want to visualize how age affects views on poverty for each country. Since our model includes an interaction, which was significant, we expect to see different trajectories for each country.

The following code generates the predicted values. The first argument, focal.predictors, is where we list the predictors we’re interested in. Notice it requires a vector, which is why we use the c() function. The second argument is the fitted model.

> Effect(focal.predictors = c("age","country"), wvs.1)

age*country effect (probability) for Too Little
    country
age  Australia    Norway    Sweden       USA
  20 0.5958889 0.5632140 0.6496015 0.4330782
  30 0.5578683 0.5635318 0.6349615 0.3939898
  40 0.5191570 0.5638496 0.6200674 0.3562127
  50 0.4802144 0.5641674 0.6049438 0.3201442
  60 0.4415107 0.5644851 0.5896166 0.2861049
  70 0.4035049 0.5648027 0.5741134 0.2543308
  80 0.3666230 0.5651203 0.5584631 0.2249734
  90 0.3312402 0.5654379 0.5426958 0.1981045

age*country effect (probability) for About Right
    country
age  Australia    Norway    Sweden       USA
  20 0.3050532 0.3250955 0.2699789 0.3918455
  30 0.3282699 0.3249058 0.2797785 0.4064109
  40 0.3502854 0.3247160 0.2895692 0.4171736
  50 0.3704966 0.3245261 0.2993161 0.4237414
  60 0.3883075 0.3243362 0.3089823 0.4258700
  70 0.4031610 0.3241462 0.3185295 0.4234793
  80 0.4145714 0.3239561 0.3279182 0.4166592
  90 0.4221528 0.3237659 0.3371079 0.4056632

age*country effect (probability) for Too Much
    country
age   Australia    Norway     Sweden       USA
  20 0.09905788 0.1116905 0.08041952 0.1750762
  30 0.11386182 0.1115624 0.08526005 0.1995993
  40 0.13055755 0.1114343 0.09036331 0.2266137
  50 0.14928897 0.1113065 0.09574006 0.2561143
  60 0.17018180 0.1111787 0.10140107 0.2880252
  70 0.19333407 0.1110511 0.10735706 0.3221899
  80 0.21880555 0.1109236 0.11361867 0.3583674
  90 0.24660694 0.1107962 0.12019630 0.3962323

Notice the output lists three sections of probabilities corresponding to each level of the response. The first section lists predicted probabilities for answering “Too Little” for each country for ages ranging from 20 to 90 in increments of 10. The predicted probability a 20-year-old from the USA answers “Too Little” is about 0.43. The second section is for “About Right”. The predicted probability a 20-year-old from the USA answers “About Right” is about 0.39. Finally, the third section is for “Too Much”. The predicted probability a 20-year-old from the USA answers “Too Much” is about 0.18.
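
As a quick check on where these numbers come from, we can reproduce the 0.43 value by hand. The sketch below assumes wvs.1 was fit as above: it sets age to 20, holds gender, religion and degree at the sample proportions the effects package uses for non-focal predictors (discussed further below), and applies the first intercept.

> b <- coef(wvs.1)    # slope coefficients
> zeta <- wvs.1$zeta  # the two intercepts (cut points)
> # linear predictor for a 20-year-old in the USA, holding gender, religion
> # and degree at their sample proportions
> eta <- b["countryUSA"] + 20*(b["age"] + b["countryUSA:age"]) +
    0.4935886*(b["gendermale"] + b["countryUSA:gendermale"]) +
    0.8539305*(b["religionyes"] + b["countryUSA:religionyes"]) +
    0.2124140*(b["degreeyes"] + b["countryUSA:degreeyes"])
> # P("Too Little") is the probability of falling at or below the first cut point
> plogis(zeta["Too Little|About Right"] - eta)  # about 0.43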

This information is much easier to digest as an effect display. Simply insert the original Effect function call into the plot function to create the effect display.

> plot(Effect(focal.predictors = c("age","country"), wvs.1), rug = FALSE)

Now we see how the model “works”. For example, in the upper right plot, we see that in the USA, the probability of answering “Too Much” increases rather dramatically with age, while the probabilities for answering the same in Norway and Sweden stay low and constant. Likewise we see that the probability of USA respondents answering “Too Little” decreases with age while the probabilities for Norway and Sweden stay rather high and constant. The effect display shows us where the interactions are happening and to what degree. (Note: setting rug = FALSE turns off the rug plot. When set to TRUE, the default, the marginal distribution of the predictor is displayed on the x axis. In this case we don’t find it very helpful since we have so much data.)

Recall that predictions from the model require values for all of the predictors. Country and age are the focal predictors, so they are varied. This means gender, religion and degree are held fixed. We can find out the values they’re fixed at by saving the result of the Effect function and viewing the model matrix.

> e.out <- Effect(focal.predictors = c("age","country"), wvs.1)
> e.out$model.matrix[1,c("gendermale","religionyes","degreeyes")]
 gendermale religionyes   degreeyes 
  0.4935886   0.8539305   0.2124140 

This says gender was set to 0.4935886, religion to 0.8539305, and degree to 0.2124140. In the original data these are indicator variables that take values of 0 or 1 corresponding to No and Yes. Does it make sense to plug in decimals instead of 0s or 1s? It does if you think of modeling a population that is about 49% men, 85% religious, and 21% with a college degree. In fact this describes our sample. By default, the effects package will take the mean of numeric variables that are held fixed. We can verify that’s what it’s doing:

> mean(WVS$gender=="male")
[1] 0.4935886
> mean(WVS$religion=="yes")
[1] 0.8539305
> mean(WVS$degree=="yes")
[1] 0.212414

If we want to change these values, we can use the given.values argument. It needs to be a named vector that uses the terms as listed in the output summary. For example, to create an effect plot for religious men without a college degree:

> e.out <- Effect(focal.predictors = c("country","age"), mod = wvs.1, 
                given.values = c(gendermale = 1, religionyes = 1, degreeyes = 0))
> plot(e.out, rug = FALSE)

We see that the overall “story” of the display does not change; the same changes are happening in the same plots. But the overall probabilities have increased, as evidenced by the y axis now topping out at about 0.7.

We can also change the range and number of values used for the focal predictors with the xlevels argument. For example, we can set age to range from 20 to 80 in steps of 10. Notice it needs to be a named list.

> e.out <- Effect(focal.predictors = c("country","age"), mod = wvs.1,
                 xlevels = list(age = seq(20,80,10)))
> plot(e.out, rug = FALSE)

We can investigate other interactions using the same syntax. Below is the effect display for the religion and country interaction. In this case, age is no longer a focal predictor and is held fixed at its mean (45.04).

> plot(Effect(focal.predictors = c("religion","country"), wvs.1), rug = FALSE)

We notice, for example, that religious people in the USA have a higher probability of answering “Too Much” compared to their counterparts in the other countries surveyed. We also notice there is much more uncertainty about estimates in Sweden for people who answered “No” to the religion question. This is due to the small numbers of respondents in those categories, as we can see with the xtabs function.

> xtabs(~ religion + country + poverty, data = WVS, country == "Sweden", 
        drop.unused.levels = TRUE)
, , poverty = Too Little

        country
religion Sweden
     no       7
     yes    597

, , poverty = About Right

        country
religion Sweden
     no       5
     yes    369

, , poverty = Too Much

        country
religion Sweden
     no       3
     yes     22

Following the example in Fox’s article, let’s fit another model that relaxes the linearity assumption for age. We can do this by generating what’s called a basis matrix for natural cubic splines. Instead of fitting a regular polynomial such as age + age^2, we fit piecewise cubic polynomials over intervals of age joined at points called knots. The ns function in the splines package makes this easy to do. Below we use it in the model formula and give the spline 4 degrees of freedom, which corresponds to 3 interior knots. Harrell (2001, p. 23) suggests 3-5 knots is usually a good choice, so this seems reasonable here.
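
If you want to see what ns actually produces, you can peek at the basis matrix on its own (a quick look, not required for fitting the model). It has one row per observation and one column per basis function, and each column enters the model as a predictor.

> dim(ns(WVS$age, 4))   # 5381 rows, 4 columns
> head(ns(WVS$age, 4))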

> wvs.2 <- polr(poverty ~ country*(gender + religion + degree + ns(age, 4)),data = WVS)
> summary(wvs.2)
> Anova(wvs.2)

Due to space considerations we don’t print the output of the summary and Anova functions. The model summary shows information for 31 coefficients and is very difficult to interpret. The Anova result is similar in substance to the first model, showing all interactions except country:gender significant. It’s worth noting the AIC of the second model is considerably lower than the first (10373.85 vs 10389.07), suggesting a better fit.

Let’s look at the country and age interaction while allowing age to range from 20 to 80:

> plot(Effect(focal.predictors = c("country","age"), mod = wvs.2, 
            xlevels = list(age = 20:80)), rug = FALSE)

We see that using a natural spline allows a nonlinear effect of age. For example we see the probability of answering “Too Little” in the USA decreases sharply from 20 to 30, increases from about age 30 to 45, and then decreases and levels out through age 80.

The effects package also allows us to create “stacked” effect displays for proportional-odds models. We do this by setting style="stacked" in the plot function.

> plot(Effect(focal.predictors = c("country","age"), mod = wvs.2, 
            xlevels = list(age = 20:80)), 
     rug = FALSE,
     style="stacked")

This plot is useful for allowing us to compare probabilities across the response categories. For example, in Norway and Sweden, people are most likely to answer “Too Little” regardless of age. The blue shaded regions dominate their graphs.

We can also create a “latent” version of the effect display. In this plot, the y axis is on the logit scale, which we interpret to be a latent, or hidden, scale from which the ordered categories are derived. We create it by setting latent = TRUE in the Effect function.

> plot(Effect(focal.predictors = c("country","age"), mod = wvs.2, 
            xlevels = list(age = 20:80),
            latent = TRUE),
     rug = FALSE,
     ylim = c(0,3.5))

This plot is useful when we’re more interested in classification than probability. The horizontal lines in the plots correspond to the intercepts in the summary output. We can think of these lines as thresholds that define where we cross over from one category to the next on the latent scale. The TL-AR line indicates the boundary between the “Too Little” and “About Right” categories. The AR-TM line indicates the boundary between the “About Right” and “Too Much” categories. Like the “stacked” effect display, we see that someone from Norway or Sweden would be expected to answer “Too Little” regardless of age, though the confidence ribbon indicates this expectation is far from certain, especially for older and younger respondents. On the other hand, most USA respondents are expected to answer “About Right”.
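
If we want the exact values of those thresholds, we can pull the intercepts straight from the fitted model object:

> wvs.2$zeta  # the TL-AR and AR-TM cut points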

Let’s see how the “latent” plot changes when we set the non-focal predictors to college-educated, non-religious female.

> plot(Effect(focal.predictors = c("country","age"), mod = wvs.2, 
            xlevels = list(age = 20:80),
            given.values = c(gendermale = 0, religionyes = 0, degreeyes = 1),
            latent = TRUE),
     rug = FALSE,
     ylim = c(0,3.5))

Notice now that predicted classification for Sweden is “About Right” over the age range but with increased uncertainty. We also see increased chances of answering “Too Little” for certain age ranges in the USA.

We can add gender as a focal predictor to compare plots for males versus females:

> plot(Effect(focal.predictors = c("country","age","gender"), mod = wvs.2, 
            xlevels = list(age = 20:80),
            latent = TRUE),
     rug = FALSE,
     ylim = c(0,3.5))

Since we didn’t fit a 3-way interaction between country, gender and age, the trajectories do not change between genders. They simply shift horizontally between the two levels of gender.

References

Fox, J. and J. Hong (2009). Effect displays in R for multinomial and proportional-odds logit models: Extensions to the effects package. Journal of Statistical Software 32:1, 1–24, <http://www.jstatsoft.org/v32/i01/>.

Harrell, F. (2001). Regression Modeling Strategies. Springer.

R Core Team (2017). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

For questions or clarifications regarding this article, contact the UVa Library StatLab: statlab@virginia.edu

Clay Ford
Statistical Research Consultant
University of Virginia Library
May 10, 2017

Getting started with the purrr package in R

If you’re wondering what exactly the purrr package does, then this blog post is for you.

Before we get started, we should mention the Iteration chapter in R for Data Science by Garrett Grolemund and Hadley Wickham. We think this is the most thorough and extensive introduction to the purrr package currently available (at least at the time of this writing). Wickham is one of the authors of the purrr package and he spends a good deal of the chapter clearly explaining how it works. Good stuff, recommended reading.

The purpose of this article is to provide a short introduction to purrr, focusing on just a handful of functions. We use some real world data and replicate what purrr does in base R so we have a better understanding of what’s going on.

We visited Yahoo Finance on 13 April 2017 and downloaded about three weeks of historical data for three companies: Boeing, Johnson & Johnson and IBM. The following R code will download and unzip the data in your current working directory if you wish to follow along.

URL <- "http://static.lib.virginia.edu/statlab/materials/data/stocks.zip"
download.file(url = URL, destfile = basename(URL))
unzip(basename(URL))

We have three CSV files. In the spirit of being efficient we would like to import these files into R using as little code as possible (as opposed to calling read.csv three different times).

Using base R functions, we could put all the file names into a vector and then apply the read.csv function to each file. This results in a list of three data frames. When done we could name each list element using the names function and our vector of file names.

# get all files ending in csv
files <- list.files(pattern = "csv$") 
# read in data
dat <- lapply(files, read.csv)
names(dat) <- gsub("\\.csv", "", files) # remove file extension

Here is how we do the same using the map function from the purrr package.

install.packages("purrr") # if package not already installed
library(purrr)
dat2 <- map(files, read.csv)
names(dat2) <- gsub("\\.csv", "", files)

So we see that map is like lapply. It takes a vector as input and applies a function to each element of the vector. map is one of the star functions in the purrr package.

Let’s say we want to find the mean Open price for each stock. Here is a base R way using lapply and an anonymous function:

lapply(dat, function(x)mean(x$Open))
$BA
[1] 177.8287

$IBM
[1] 174.3617

$JNJ
[1] 125.8409

We can do the same with map.

map(dat, function(x)mean(x$Open))
$BA
[1] 177.8287

$IBM
[1] 174.3617

$JNJ
[1] 125.8409

But map allows us to bypass the function function. Using a tilde (~) in place of function and a dot (.) in place of x, we can do this:

map(dat, ~mean(.$Open))

Furthermore, purrr provides several versions of map that allow you to specify the structure of your output. For example, if we want a vector instead of a list we can use the map_dbl function. The “_dbl” indicates that it returns a vector of type double (i.e., numbers with decimals).

map_dbl(dat, ~mean(.$Open))
      BA      IBM      JNJ 
177.8287 174.3617 125.8409 
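
purrr also has typed variants such as map_int, map_chr and map_lgl. For example, to count the rows in each data frame as an integer vector (a quick illustration using the dat list from above):

map_int(dat, nrow)  # a named integer vector, one row count per stock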

Now let’s say that we want to extract each stock’s Open price data. In other words, we want to go into each data frame in our list and pull out the Open column. We can do that with lapply as follows:

lapply(dat, function(x)x$Open)
$BA
 [1] 178.25 177.50 179.00 178.39 177.56 179.00 176.88 177.08 178.02 177.25 177.40 176.29 174.37 176.85 177.34 175.96 179.99
[18] 180.10 178.31 179.82 179.00 178.54 177.16

$IBM
 [1] 171.04 170.65 172.53 172.08 173.47 174.70 173.52 173.82 173.98 173.86 174.30 173.94 172.69 175.12 174.43 174.04 176.01
[18] 175.65 176.29 178.46 175.71 176.18 177.85

$JNJ
 [1] 124.54 124.26 124.87 125.12 124.85 124.72 124.51 124.73 124.11 124.74 125.05 125.62 125.16 125.86 126.10 127.05 128.38
[18] 128.04 128.45 128.44 127.05 126.86 125.83

Using map is a little easier. We just provide the name of the column we want to extract.

map(dat, "Open")
$BA
 [1] 178.25 177.50 179.00 178.39 177.56 179.00 176.88 177.08 178.02 177.25 177.40 176.29 174.37 176.85 177.34 175.96 179.99
[18] 180.10 178.31 179.82 179.00 178.54 177.16

$IBM
 [1] 171.04 170.65 172.53 172.08 173.47 174.70 173.52 173.82 173.98 173.86 174.30 173.94 172.69 175.12 174.43 174.04 176.01
[18] 175.65 176.29 178.46 175.71 176.18 177.85

$JNJ
 [1] 124.54 124.26 124.87 125.12 124.85 124.72 124.51 124.73 124.11 124.74 125.05 125.62 125.16 125.86 126.10 127.05 128.38
[18] 128.04 128.45 128.44 127.05 126.86 125.83

We often want to plot financial data. In this case we may want to plot Closing price for each stock and look for trends. We can do this with the base R function mapply. First we create a vector of stock names for plot labeling. Next we set up one row of three plotting regions. Then we use mapply to create the plot. The “m” in mapply stands for “multivariate”: it applies a function to multiple arguments in parallel. In this case we have two arguments: the data frame (from which we extract the Closing price) and the stock name. Notice that mapply requires that the function come first, followed by the arguments.

stocks <- sub("\\.csv","", files)
par(mfrow=c(1,3))
mapply(function(x,y)plot(x$Close, type = "l", main = y), x = dat, y = stocks)

The purrr equivalent is map2. Again we can substitute a tilde (~) for function, but now we need to use .x and .y to identify the arguments. However, the ordering is the same as with map: the data come first and then the function.

map2(dat, stocks, ~plot(.x$Close, type="l", main = .y))

Each time we run mapply or map2 above, the following is printed to the console:

$BA
NULL

$IBM
NULL

$JNJ
NULL

This is because both functions return a value. Since plot returns no value, NULL is printed. The purrr package provides walk for dealing with functions like plot. Here is the same task with walk2 instead of map2. It produces the plots and prints nothing to the console.

walk2(dat, stocks, ~plot(.x$Close, type="l", main = .y))

At some point we may want to collapse our list of three data frames into a single data frame. This means we’ll want to add a column to indicate which record belongs to which stock. Using base R this is a two-step process. We use do.call to apply the rbind function to the elements of our list. Then we add a column called Stock by taking advantage of the fact that the row names of our data frame contain the name of the original list element, in this case the stock name.

datDF <- do.call(rbind, dat)
# add stock names to data frame
datDF$Stock <- gsub("\\.[0-9]*", "", rownames(datDF)) # remove period and numbers
head(datDF)
           Date   Open   High    Low  Close  Volume Adj.Close Stock
BA.1 2017-04-12 178.25 178.25 175.94 176.05 2920000    176.05    BA
BA.2 2017-04-11 177.50 178.60 176.96 178.57 2259700    178.57    BA
BA.3 2017-04-10 179.00 179.97 177.48 177.56 2259500    177.56    BA
BA.4 2017-04-07 178.39 179.09 177.26 178.85 2704700    178.85    BA
BA.5 2017-04-06 177.56 178.22 177.12 177.37 2343600    177.37    BA
BA.6 2017-04-05 179.00 180.18 176.89 177.08 2387100    177.08    BA

Using purrr, we could have used map_df instead of map with the read.csv function, but we would have lost the source file information.

dat2DF <- map_df(files, read.csv) # works, but which record goes with which stock?
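
One way to keep that information, sketched below, is to name the input vector and use the .id argument of map_df (assuming your version of purrr supports it), which records the name of the source element in a new column:

dat2DF <- map_df(set_names(files, stocks), read.csv, .id = "Stock")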

We could also use purrr’s reduce function. That will collapse the list into a single data frame. But again we have no way of labeling which row came from which stock.

dat2DF <- reduce(dat, rbind) # works, but which record goes with which stock?

To accomplish this with purrr, we need to use the stocks vector we created earlier along with the map2_df function. This function applies a function to two arguments and returns a data frame. The function we want to apply is update_list, another purrr function. The update_list function allows you to add things to a list element, such as a new column to a data frame. Below we use the formula notation again and .x and .y to indicate the arguments. The result is a single data frame with a new Stock column.

dat2DF <- map2_df(dat2, stocks, ~update_list(.x, stock = .y))
head(dat2DF)
        Date   Open   High    Low  Close  Volume Adj.Close stock
1 2017-04-12 178.25 178.25 175.94 176.05 2920000    176.05    BA
2 2017-04-11 177.50 178.60 176.96 178.57 2259700    178.57    BA
3 2017-04-10 179.00 179.97 177.48 177.56 2259500    177.56    BA
4 2017-04-07 178.39 179.09 177.26 178.85 2704700    178.85    BA
5 2017-04-06 177.56 178.22 177.12 177.37 2343600    177.37    BA
6 2017-04-05 179.00 180.18 176.89 177.08 2387100    177.08    BA

Finally, we should consider reformatting the Date column as a Date instead of a Factor. The easiest way to deal with this would have been to use the read_csv function from the readr package instead of read.csv. But in the interest of demonstrating some more purrr functionality, let’s pretend we can’t do that. Further, let’s pretend we don’t know which columns are Factor, but we would like to convert them to Date if they are Factor. This time we give a purrr solution first.

To do this we nest one map function in another. The first is dmap_if. dmap is just like map, except dmap returns a data frame. dmap_if allows us to define a condition that dictates whether or not we apply the function. In this case the condition is determined by is.factor. If is.factor returns TRUE, then we apply the ymd function from the lubridate package. Now dmap_if takes a data frame, not a list, so we have to use map to apply dmap_if to each data frame in our list. The final code is as follows:

dat2 <- map(dat2, ~dmap_if(., is.factor, lubridate::ymd))

Doing this in base R is possible but far more difficult. We nest one lapply function inside another, but since lapply returns a list, we need to wrap the first lapply with as.data.frame. And within the first lapply we have to use the assignment operator as a function, which works but looks cryptic!

dat <- lapply(dat, 
              function(x)as.data.frame(
                lapply(x,
                       function(y)
                         if(is.factor(y)) 
                           `<-`(y, lubridate::ymd(y)) 
                       else y)))

This article provides just a taste of purrr. We hope it gets you started learning more about the package. Be sure to read the documentation as well. Each help page contains illustrative examples. Note that purrr is a very young package. At the time of this writing it is at version 0.2.2. There are sure to be improvements and changes in the coming months and years.

For questions or clarifications regarding this article, contact the UVa Library StatLab: statlab@virginia.edu

Clay Ford
Statistical Research Consultant
University of Virginia Library
April 14, 2017

Endangered Data Week (April 17-21)

The UVa Library is hosting a number of events as part of Endangered Data Week (April 17-21), a new, collaborative effort, coordinated across campuses, nonprofits, libraries, citizen science initiatives, and cultural heritage institutions, to shed light on public datasets that are in danger of being deleted, repressed, mishandled, or lost.

Below is the current list of events we are hosting at UVa for Endangered Data Week. No reservations are required, and light refreshments will be provided, so please join us if you can! Click here for more information.

 

  • Introduction to Libra Data (Dataverse at UVa) – Monday, April 17th 11am-noon, in Brown Library, Room 133.
  • Introduction to Git/Github – Tuesday, April 18th, noon-1:30pm, in Brown Library, Room 133.
  • Introduction to DocNow – Tuesday, April 18th, 2pm-4pm, in Alderman Library, Room 421.
  • Web Scraping with R – Wednesday, April 19th, 10:30am-noon, in Brown Library, Room 133.
  • Preserving Artifact and Architecture with Cultural Heritage Informatics – Friday, April 21st, 10:30am-11:30am, Location TBD.
  • Endangered Data Week webinar – Friday, April 21st, 1pm-2:30pm, in Brown Library, Room 133.

Within the UVa Library, these events are being hosted by Research Data Services and the Scholars’ Lab.

For more information about other Endangered Data Week events around the country and around the world, please visit the Endangered Data Week website.

Fall 2017 Data Science Short Courses

The Library’s Research Data Services is offering 1-credit data science courses in Fall 2017 through the Data Science Institute: Data Wrangling and Exploration in R and a Public Interest Data Lab.


DS 6559-001 Data Wrangling and Exploration in R (1 credit, meets the first five weeks of the semester)
Clay Ford
T,R 12:30-1:45 from 8/22/2017-9/21/2017
New Cabell Hall 489

This course covers data exploration, cleaning, and manipulation in R. Topics include reading in/writing out data in various formats, R data structures, working with date/time data, character manipulation, using regular expressions in R, reshaping data, data transformations, data aggregation and basic data visualization to aid in data cleaning.

DS 5559-001 Public Interest Data Lab (1 credit, meets the first ten weeks of the semester)
Michele Claibourn
F 11:00-1:00 from 8/25/2017-11/3/2017 (no meeting on 9/29)
Clark Hall, Brown Library, Room 148

The lab course will provide experience working collaboratively, openly, and reproducibly on data science projects organized by the lab director — for example, working with local agencies to understand their data and improve processes, working with news data to help citizens better navigate a complex media environment. The goal is to provide students with an opportunity to enhance their data skills and to gain experience working as a team on a joint project while promoting social good.


To register, search for Subject “DS” and Course Number “6559” or “5559” in SIS. Full-time employees of UVA can use their Education Benefit and register through the Community Scholar Program.

2017-18 StatLab Fellows

The UVA Library’s Research Data Services is seeking StatLab Fellows for the 2017-2018 academic year. Responding to growing interest in applied data science experience along with a developing movement to use data and data science for the public interest, the program provides up to four UVA graduate students experience working collaboratively, openly, and reproducibly on data science projects — for example, working with local agencies to understand their data and improve processes, or working with news data to help citizens better navigate a complex media environment. The goal is to provide graduate students with an opportunity to enhance their data skills and to gain experience working as a team on a joint project while promoting social good.

Program Description
The StatLab provides consultation, training, and support around data analysis, statistical methods, data wrangling, and visualization. Research across the social, natural, engineering, and data sciences increasingly draws upon sophisticated research designs, complex statistical methods, and computational power to draw inferences about the world around us. The StatLab sees its mission as both contributing to and assisting with a wide portfolio of quantitative research, and Fellows will have the opportunity to contribute to academic consulting as well, providing exposure to a variety of designs, challenges, and methods across disciplines.

Fellows will have the opportunity to deepen their knowledge of methods and data analysis tools by: working collaboratively on a project, writing and blogging about their work, consulting and working with researchers across disciplinary boundaries, and presenting their own work to a diverse audience.

Fellowship Details
StatLab Fellows are expected to work between 5 and 10 hours per week in the fall and spring semesters and will be paid $20/hour (to provide some flexibility and accommodate variable schedules across the semester). Fellows should be available for a Friday noon meeting most weeks, and will:

  • contribute intellectually and methodologically to a collaborative project in public and reproducible ways;
  • expand or deepen their methodological knowledge and skills;
  • write blog posts about the project and their work;
  • contribute to consultations and collaborations with researchers seeking help from the StatLab on occasion;
  • and advance and present their own research projects.

Applicants should have completed at least two methods courses before applying and have experience in a statistical software environment such as R or Stata or in a computing environment such as Python. We encourage applications from women, people of color, LGBT students, first-generation college students and other under-represented groups.

How to Apply
Send a CV and cover letter by April 17 outlining:

  • your research interests;
  • your experience with data analysis, statistical methods, data wrangling, and visualization;
  • the skills you expect to bring to a collaborative project;
  • and a summary of what you hope to gain as a StatLab fellow.

Email a complete application package to Michele Claibourn, mclaibourn@virginia.edu. Questions about the StatLab Fellowship and the application process should also be directed to Michele Claibourn. You can learn more about the StatLab at our website.

Working with dates and time in R using the lubridate package

Sometimes we have data with dates and/or times that we want to manipulate or summarize. A common example in the health sciences is time-in-study. A subject may enter a study on Feb 12, 2008 and exit on November 4, 2009. How many days was the person in the study? (Don’t forget 2008 was a leap year; February had 29 days.) What was the median time-in-study for all subjects?

Another example is an experiment that times participants performing an activity, applies a treatment to some of them, and then re-times the activity. What was the difference in times between subjects who received the treatment and those who did not? If our data is stored and read in as something like “01:23:03”, then we’ll need to convert to seconds.
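
Here is a quick preview of both calculations using lubridate, the package we introduce next (ymd parses year-month-day dates; hms parses hours:minutes:seconds times):

library(lubridate)
# days in the study, accounting for the leap day in 2008
as.numeric(ymd("2009-11-04") - ymd("2008-02-12"))  # 631 days
# convert "01:23:03" to seconds
as.numeric(hms("01:23:03"))  # 4983 seconds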

The lubridate package for the R statistical computing environment was designed to help us deal with these kinds of data. The out-of-the-box base R installation also provides functions for working with dates and times, but the functions in the lubridate package are a little easier to use and remember.

Formatting dates

When we import data into R, dates and times are usually stored as character or factor by default due to symbols such as “-”, “:” and “/”. (Though see the readr package for functions that attempt to parse date and times automatically.) Using the str or class functions will tell you how they’re stored. If dates or times are stored as character or factor that means we can’t calculate or summarize elapsed times.

To format dates, lubridate provides a series of functions that are a permutation of the letters “m”, “d” and “y” to represent the ordering of month, day and year. For example, if our data has a column of dates such as May 11, 1996, our dates are ordered month-day-year. Therefore we would use the mdy function to transform the column to a date object. If our dates were in the order of, say, year-month-day, we would use the ymd function. lubridate provides functions for every permutation of “m”, “d”, “y”.

Let’s demonstrate. Below we generate two character vectors of dates, inspect their class, reformat them using the mdy function and then inspect their class again.

library(lubridate)
begin <- c("May 11, 1996", "September 12, 2001", "July 1, 1988")
end <- c("7/8/97","10/23/02","1/4/91")
class(begin)
## [1] "character"
class(end)
## [1] "character"
(begin <- mdy(begin))
## [1] "1996-05-11" "2001-09-12" "1988-07-01"
(end <- mdy(end))
## [1] "1997-07-08" "2002-10-23" "1991-01-04"
class(begin)
## [1] "Date"
class(end)
## [1] "Date"

The dates now have class “Date” and are printed in year-month-day format. They may appear to still be character data when printed, but they are in fact numbers. The “Date” class means dates are stored as the number of days since January 1, 1970, with negative values for earlier dates. We can use the as.numeric function to view the raw values.

as.numeric(begin)
## [1]  9627 11577  6756
as.numeric(end)
## [1] 10050 11983  7673

With dates stored in this fashion we can do things like subtraction to calculate number of days between two dates.
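
For example, subtracting the two Date vectors we just created returns the number of elapsed days for each pair:

end - begin  # a difftime in days: 423, 406 and 917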

We can also format dates that contain time information by appending _h, _hm, or _hms to any of the aforementioned functions. “h”, “m”, and “s” stand for hour, minute, and second, respectively. Below we add some time data to our dates and demonstrate how to use mdy_hm.

begin <- c("May 11, 1996 12:05", "September 12, 2001 1:00", "July 1, 1988 3:32")
end <- c("7/8/97 8:00","10/23/02: 12:00","1/4/91 2:05")
(begin <- mdy_hm(begin))
## [1] "1996-05-11 12:05:00 UTC" "2001-09-12 01:00:00 UTC"
## [3] "1988-07-01 03:32:00 UTC"
(end <- mdy_hm(end))
## [1] "1997-07-08 08:00:00 UTC" "2002-10-23 12:00:00 UTC"
## [3] "1991-01-04 02:05:00 UTC"
class(begin)
## [1] "POSIXct" "POSIXt"
class(end)
## [1] "POSIXct" "POSIXt"

Notice the class is now “POSIXct”. “POSIXct” represents the number of seconds since the beginning of 1970. If a date is before 1970, the number of seconds is negative. Notice also that the letters “UTC” have been appended to the date-times. UTC is short for Coordinated Universal Time. You can read more about UTC here, but it’s basically the time standard by which the world regulates clocks. If we prefer, we can specify a time zone when formatting dates by using the tz argument. Here’s how we can specify the Eastern Time Zone in the United States when formatting our dates.

begin <- c("May 11, 1996 12:05", "September 12, 2001 1:00", "July 1, 1988 3:32")
end <- c("7/8/97 8:00","10/23/02: 12:00","1/4/91 2:05")
(begin <- mdy_hm(begin, tz = "US/Eastern"))
## [1] "1996-05-11 12:05:00 EDT" "2001-09-12 01:00:00 EDT"
## [3] "1988-07-01 03:32:00 EDT"
(end <- mdy_hm(end, tz = "US/Eastern"))
## [1] "1997-07-08 08:00:00 EDT" "2002-10-23 12:00:00 EDT"
## [3] "1991-01-04 02:05:00 EST"

Notice the last date is EST instead of EDT. EST means “Eastern Standard Time”. EDT means “Eastern Daylight Time”. Any day and time that falls during Daylight Savings is EDT. Otherwise it’s EST. How do we know the appropriate time zone phrase to use in the tz argument? We can use the OlsonNames function to see a character vector of all time zone names. Just enter OlsonNames() in the R console and hit Enter.
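
Since OlsonNames() returns several hundred names, it can help to search it. For example, to list the time zones containing “Eastern”:

grep("Eastern", OlsonNames(), value = TRUE)  # includes "US/Eastern"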

We can also read in times without dates using the functions ms, hm, or hms, where again “h”, “m”, and “s” stand for “hours”, “minutes”, and “seconds”. Here are a few examples.

time1 <- c("1:13", "0:58", "1:01")
time2 <- c("12:23:11", "09:45:31", "12:05:22")
time3 <- c("2:14", "2:16", "3:35")

(time1 <- ms(time1))
## [1] "1M 13S" "58S"    "1M 1S"
(time2 <- hms(time2))
## [1] "12H 23M 11S" "9H 45M 31S"  "12H 5M 22S"
(time3 <- hm(time3))
## [1] "2H 14M 0S" "2H 16M 0S" "3H 35M 0S"

Once again, don’t be fooled by the print out. These times are actually stored as seconds. Use as.numeric to verify.

as.numeric(time1)
## [1] 73 58 61
as.numeric(time2)
## [1] 44591 35131 43522
as.numeric(time3)
## [1]  8040  8160 12900

The class of these new time objects is neither “Date” nor “POSIX” but rather “Period”.

class(time1)
## [1] "Period"
## attr(,"package")
## [1] "lubridate"

Period is one of three classes lubridate provides for time spans. Let’s learn more about these classes.

Durations, Intervals and Periods

lubridate provides three classes, or three different ways, to distinguish between different types of time spans.

  1. Duration
  2. Interval
  3. Period

Understanding these classes will help you get the most out of lubridate.

The simplest is a Duration, which is just a span of time measured in seconds. There is no start date.

An Interval is also measured in seconds but has an associated start date. An Interval measures elapsed seconds between two specific points in time.

A Period records a time span in units larger than seconds, such as years or months. Unlike seconds, these units vary in length: June has 30 days while July has 31 days, and February has 28 days except in leap years, when it has 29. With the Period class, we can add 1 month to February 1 and get March 1. It allows us to perform calculations in calendar or clock time as opposed to an absolute number of seconds.
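
For example, adding a Period of one month to February 1 lands on March 1, regardless of how many seconds that month contains (months() here is lubridate’s Period constructor):

ymd("2017-02-01") + months(1)  # "2017-03-01"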

Let’s see these three classes in action. Below we define two dates in the US Eastern time zone. The start day is March 11, 2017 at 5:21 AM. The end day is March 12, 2017 at the same time. Note that Daylight Savings begins (or began, depending on when you’re reading this) on March 12 at 2:00 AM.

start <- mdy_hm("3-11-2017 5:21", tz = "US/Eastern")
end <- mdy_hm("3-12-2017 5:21", tz = "US/Eastern")

Since we’re dealing with elapsed time between two dates, let’s start with Intervals. We can define an Interval using the %--% operator.

time.interval <- start %--% end
time.interval
## [1] 2017-03-11 05:21:00 EST--2017-03-12 05:21:00 EDT

Notice how Intervals print. They show the beginning date and end date. And also notice how the time zone changes from EST to EDT, indicating that Daylight Savings has started. If we look at the structure of an Interval object we see it contains elapsed time in seconds, 82800, and the start date.

str(time.interval)
## Formal class 'Interval' [package "lubridate"] with 3 slots
##   ..@ .Data: num 82800
##   ..@ start: POSIXct[1:1], format: "2017-03-11 05:21:00"
##   ..@ tzone: chr "US/Eastern"

To create a Duration between these two dates, we can use the as.duration function.

time.duration <- as.duration(time.interval)
time.duration
## [1] "82800s (~23 hours)"

Notice a Duration object prints the elapsed time in seconds as well as something a little friendlier to read, in this case hours. Because Daylight Savings went into effect at 2:00 AM during the interval, an hour was skipped. Thus the duration between these two time points is only 23 hours.

If we look at the structure of a Duration object we see it just contains elapsed time in seconds.

str(time.duration)
## Formal class 'Duration' [package "lubridate"] with 1 slot
##   ..@ .Data: num 82800

We can create a Period from an Interval using the as.period function.

time.period <- as.period(time.interval)
time.period
## [1] "1d 0H 0M 0S"

A Period prints elapsed time as integers in the form of years, months, weeks, days and so on. Notice this Period is 1 day long. While only 23 hours have technically elapsed since the start date, according to our clock one day has elapsed.

If we look at the structure we see a Period contains several slots for “clock time” values and, like the Duration object, no associated date.

str(time.period)
## Formal class 'Period' [package "lubridate"] with 6 slots
##   ..@ .Data : num 0
##   ..@ year  : int 0
##   ..@ month : int 0
##   ..@ day   : int 1
##   ..@ hour  : int 0
##   ..@ minute: int 0

To recap:

  • An Interval is elapsed time in seconds between two specific dates. (If no time is provided, the time for each date is assumed to be 00:00:00, or midnight.)
  • A Duration is elapsed time in seconds independent of a start date.
  • A Period is elapsed time in “calendar” or “clock” time (4 weeks, 2 months, etc) independent of a start date.

Calculations and conversions

Once we format dates and define our time span we often want to do some calculations and conversions. For example, we may want to calculate the mean elapsed time in weeks for different groups.

Let’s create some data and demonstrate. First we enter arbitrary start and end dates and define an Interval:

start <- c("2012-08-21", "2012-09-01", "2012-08-15", "2012-09-18")
end <- c("2012-09-16", "2012-09-06", "2012-08-22", "2012-10-11")
elapsed.time <- start %--% end

If we view the elapsed.time object we’ll just see date ranges. We can use as.duration or even as.numeric to view the elapsed time in seconds but that’s not very useful in this case. It would be better if we converted seconds to another unit of time such as weeks or days. Fortunately lubridate makes this easy.

The trick is to convert intervals to durations and then divide the duration by a duration object in the units we desire. That’s a mouthful but easy to demonstrate. Below we demonstrate how to convert to weeks. First we convert our interval to a duration, and then we divide by dweeks(1). The function call dweeks(1) generates a duration of one week in seconds, which is 604800. Dividing that into our duration returns number of weeks.

as.duration(elapsed.time) / dweeks(1)
## [1] 3.7142857 0.7142857 1.0000000 3.2857143

We can do the same with hours, days, minutes and years.

as.duration(elapsed.time) / dhours(1)
## [1] 624 120 168 552
as.duration(elapsed.time) / ddays(1)
## [1] 26  5  7 23
as.duration(elapsed.time) / dminutes(1)
## [1] 37440  7200 10080 33120
as.duration(elapsed.time) / dyears(1)
## [1] 0.07123288 0.01369863 0.01917808 0.06301370

Once we have the durations in the units we want, we can then do things like find the mean.

mean(as.duration(elapsed.time) / dweeks(1))
## [1] 2.178571

Of course this was just for demonstration. With only 4 values, the mean is not a very useful summary.

As another example, consider the following vector of character data summarizing a duration of time. “12w” means 12 weeks and “4d” means 4 days.

StudyTime <- c("12w 4d", "11w", "10w 5d", NA, "12w 6d")

What if we wanted to convert that to numeric weeks? First we’ll give the R code and then explain how it works.

as.duration(period(StudyTime, units = c("week","day"))) / dweeks(1)
## [1] 12.57143 11.00000 10.71429       NA 12.85714

First we use the period function to define a Period using our data. The units argument says the first part of our data represents weeks and the second part represents days. That is then converted to a Duration object that stores time in seconds. Finally we divide by dweeks(1) to convert seconds to weeks. Notice how the NA remains NA and that “11w” converts to 11 just fine even though it had no days appended to it.

There is much more to the lubridate package. Read the vignette and check out the examples on each function’s help page. But hopefully the material in this post gets you started with reading in dates, creating time-spans, and making conversions and calculations.

For questions or clarifications regarding this article, contact the UVa Library StatLab: statlab@virginia.edu

Clay Ford
Statistical Research Consultant
University of Virginia Library
January 11, 2017

The Wilcoxon Rank Sum Test

The Wilcoxon Rank Sum Test is often described as the non-parametric version of the two-sample t-test. You sometimes see it in analysis flowcharts after a question such as “is your data normal?” A “no” branch off this question will recommend a Wilcoxon test if you’re comparing two groups of continuous measures.

So what is this Wilcoxon test? What makes it non-parametric? What does that even mean? And how do we implement it and interpret it? Those are some of the questions we aim to address in this post.

First, let’s recall the assumptions of the two-sample t test for comparing two population means:

1. The two samples are independent of one another
2. The two populations have equal variance or spread
3. The two populations are normally distributed

There’s no getting around #1. That assumption must be satisfied for a two-sample t-test. When assumptions #2 and #3 (equal variance and normality) are not satisfied but the samples are large (say, greater than 30), the results are approximately correct. But when our samples are small and our data are skewed or non-normal, we probably shouldn’t place much faith in the two-sample t-test.

This is where the Wilcoxon Rank Sum Test comes in. It only makes the first two assumptions of independence and equal variance. It does not assume our data have a known distribution. Known distributions are described with math formulas. These formulas have parameters that dictate the shape and/or location of the distribution. For example, variance and mean are the two parameters of the Normal distribution that dictate its shape and location, respectively. Since the Wilcoxon Rank Sum Test does not assume known distributions, it does not deal with parameters, and therefore we call it a non-parametric test.

Whereas the null hypothesis of the two-sample t test is equal means, the null hypothesis of the Wilcoxon test is usually taken as equal medians. Another way to think of the null is that the two populations have the same distribution with the same median. If we reject the null, that means we have evidence that one distribution is shifted to the left or right of the other. Since we’re assuming our distributions are equal, rejecting the null means we have evidence that the medians of the two populations differ. The R statistical programming environment, which we use to implement the Wilcoxon rank sum test below, refers to this as a “location shift”.


Let’s work a quick example in R. The data below come from Hogg & Tanis, example 8.4-6. It involves the weights of packaging from two companies selling the same product. We have 8 observations from each company, A and B. We would like to know if the distribution of weights is the same at each company. A quick boxplot reveals the data have similar spread but may be skewed and non-normal. With such a small sample it might be dangerous to assume normality.

A <- c(117.1, 121.3, 127.8, 121.9, 117.4, 124.5, 119.5, 115.1)
B <- c(123.5, 125.3, 126.5, 127.9, 122.1, 125.6, 129.8, 117.2)
dat <- data.frame(weight = c(A,B), 
                  company = rep(c("A","B"), each=8))
boxplot(weight ~ company, data = dat)


Now we run the Wilcoxon Rank Sum Test using the wilcox.test function. Again, the null is that the distributions are the same, and hence have the same median. The alternative is two-sided. We have no idea if one distribution is shifted to the left or right of the other.

wilcox.test(weight ~ company, data = dat)

	Wilcoxon rank sum test

data:  weight by company
W = 13, p-value = 0.04988
alternative hypothesis: true location shift is not equal to 0

First we notice the p-value is a little less than 0.05. Based on this result we may conclude the medians of these two distributions differ. The alternative hypothesis is stated as the “true location shift is not equal to 0”. That’s another way of saying “the distribution of one population is shifted to the left or right of the other,” which implies different medians.

The Wilcoxon statistic is returned as W = 13. This is NOT an estimate of the difference in medians. This is actually the number of times that a package weight from company B is less than a package weight from company A. We can calculate it by hand using nested for loops as follows (though we should note that this is not how the wilcox.test function calculates W):

W <- 0
for(i in 1:length(A)){
  for(j in 1:length(B)){
    if(B[j] < A[i]) W <- W + 1
  }
}
W
[1] 13

Another way to do this is to use the outer function, which can take two vectors and perform an operation on all pairs. The result is an 8 x 8 matrix consisting of TRUE/FALSE values. Using sum on the matrix counts all instances of TRUE.

sum(outer(B, A, "<"))
[1] 13

Of course we could also go the other way and count the number of times that a package weight from company A is less than a package weight from company B. This gives us 51.

sum(outer(A, B, "<"))
[1] 51

If we relevel our company variable in data.frame dat to have “B” as the reference level, we get the same result in the wilcox.test output.

dat$company <- relevel(dat$company, ref = "B")
wilcox.test(weight ~ company, data = dat)

	Wilcoxon rank sum test

data:  weight by company
W = 51, p-value = 0.04988
alternative hypothesis: true location shift is not equal to 0

So why are we counting pairs? Recall this is a non-parametric test. We’re not estimating parameters such as a mean. We’re simply trying to find evidence that one distribution is shifted to the left or right of the other. In our boxplot above, it looks like the distributions from both companies are reasonably similar but with B shifted to the right, or higher, than A. One way to think about testing if the distributions are the same is to consider the probability of a randomly selected observation from company A being less than a randomly selected observation from company B: P(A < B). We could estimate this probability as the number of pairs with A less than B divided by the total number of pairs. In our case that comes to \(51/(8\times8)\) or \(51/64\). Likewise we could estimate the probability of B being less than A. In our case that's \(13/64\). So we see that the statistic W is the numerator in this estimated probability.
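
We can compute both estimated probabilities directly:

sum(outer(A, B, "<")) / (length(A) * length(B))  # P(A < B) estimate: 51/64 = 0.796875
sum(outer(B, A, "<")) / (length(A) * length(B))  # P(B < A) estimate: 13/64 = 0.203125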

The exact p-value is determined from the distribution of the Wilcoxon Rank Sum Statistic. We say "exact" because the distribution of the Wilcoxon Rank Sum Statistic is discrete. It is parametrized by the two sample sizes we're comparing. "But wait, I thought the Wilcoxon test was non-parametric?" It is! But the test statistic W has a distribution which does not depend on the distribution of the data.

We can calculate the exact two-sided p-values explicitly using the pwilcox function (they’re two-sided, so we multiply by 2):

For W = 13, \(P(W \leq 13)\):

pwilcox(q = 13, m = 8, n = 8) * 2
[1] 0.04988345

For W = 51, \(P(W \geq 51)\), we have to get \(P(W \leq 50)\) and then subtract from 1 to get \(P(W \geq 51)\):

(1 - pwilcox(q = 51 - 1, m = 8, n = 8)) * 2
[1] 0.04988345

By default the wilcox.test function will calculate exact p-values if the samples contain fewer than 50 finite values and there are no ties in the values. (More on “ties” in a moment.) Otherwise a normal approximation is used. To force the normal approximation, set exact = FALSE.

dat$company <- relevel(dat$company, ref = "A")
wilcox.test(weight ~ company, data = dat, exact = FALSE)

	Wilcoxon rank sum test with continuity correction

data:  weight by company
W = 13, p-value = 0.05203
alternative hypothesis: true location shift is not equal to 0

When we use the normal approximation the phrase “with continuity correction” is added to the name of the test. A continuity correction is an adjustment that is made when a discrete distribution is approximated by a continuous distribution. The normal approximation is very good and computationally faster for samples larger than 50.

Let’s return to “ties”. What does that mean and why does that matter? To answer those questions first consider the name “Wilcoxon Rank Sum test”. The name is due to the fact that the test statistic can be calculated as the sum of the ranks of the values. In other words, take all the values from both groups, rank them from lowest to highest according to their value, and then sum the ranks from one of the groups. Here’s how we can do it in R with our data:

sum(rank(dat$weight)[dat$company=="A"])
[1] 49

Above we rank all the weights using the rank function, select only those ranks for company A, and then sum them. This is the classic way to calculate the Wilcoxon Rank Sum test statistic. Notice it doesn’t match the test statistic provided by wilcox.test, which was 13. That’s because R is using a different calculation due to Mann and Whitney. Their test statistic, sometimes called U, is a linear function of the original rank sum statistic, usually called W:

\[U = W - \frac{n_1(n_1 + 1)}{2}\]

where \(n_1\) is the number of observations in the group whose ranks were summed. We can verify this relationship for our data:

sum(rank(dat$weight)[dat$company=="A"]) - (8*9/2)
[1] 13

This is in fact how the wilcox.test function calculates the test statistic, though it labels it W instead of U.

The rankings of values have to be modified in the event of ties. For example, in the data below 7 occurs twice. One of the 7’s could be ranked 3 and the other 4. But then one would be ranked higher than the other and that’s not correct. We could rank them both 3 or both 4, but that wouldn’t be right either. What we do then is take the average of their ranks. Below this is \((3 + 4)/2 = 3.5\). R does this by default when ranking values.

vals <- c(2, 4, 7, 7, 12)
rank(vals)
[1] 1.0 2.0 3.5 3.5 5.0

The impact of ties means the Wilcoxon rank sum distribution cannot be used to calculate exact p-values. If ties occur in our data and we have fewer than 50 observations, the wilcox.test function returns a normal approximated p-value along with a warning message that says “cannot compute exact p-value with ties”.
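
For a small illustration with made-up numbers (not our packaging data), the tied value of 3 below triggers this behavior:

wilcox.test(c(1, 2, 3, 3), c(3, 4, 5))

R computes a normal-approximation p-value and warns that it “cannot compute exact p-value with ties”.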

Whether exact or approximate, p-values do not tell us anything about how different these distributions are. For the Wilcoxon test, a p-value is the probability of getting a test statistic at least as extreme as the one we observed, assuming both distributions are the same. In addition to a p-value we would like some estimated measure of how these distributions differ. The wilcox.test function provides this information when we set conf.int = TRUE.

wilcox.test(weight ~ company, data = dat, conf.int = TRUE)

	Wilcoxon rank sum test

data:  weight by company
W = 13, p-value = 0.04988
alternative hypothesis: true location shift is not equal to 0
95 percent confidence interval:
 -8.5 -0.1
sample estimates:
difference in location 
                 -4.65 

This returns a “difference in location” measure of -4.65. The documentation for the wilcox.test function states this “does not estimate the difference in medians (a common misconception) but rather the median of the difference between a sample from x and a sample from y.”

Again we can use the outer function to verify this calculation. First we calculate the difference between all pairs and then find the median of those differences.

median(outer(A,B,"-"))
[1] -4.65

The confidence interval is fairly wide due to the small sample size, but it appears we can safely say the weights of company A’s packaging are shifted lower than company B’s, with the shift estimated to be somewhere between 0.1 and 8.5.

If we’re explicitly interested in the difference in medians between the two populations, we could try a bootstrap approach using the boot package. The idea is to resample the data (with replacement) many times, say 1000 times, each time taking a difference in medians. We then take the median of those 1000 differences to estimate the difference in medians. We can then find a confidence interval based on our 1000 differences. An easy way is to use the 2.5th and 97.5th percentiles as the upper and lower bounds of a 95% confidence interval.

Here is one way to carry this out in R.

First we load the boot package, which comes with R, and create a function called med.diff to calculate the difference in medians. In order to work with the boot package’s boot function, our function needs two arguments: one for the data and one to index the data. We have arbitrarily named these arguments d and i. The boot function will take our data, d, and resample it according to randomly selected row numbers, i. It will then return the difference in medians for the resampled data.

library(boot)
med.diff <- function(d, i) {
   tmp <- d[i,] 
   median(tmp$weight[tmp$company=="A"]) - 
     median(tmp$weight[tmp$company=="B"])
 }

Now we use the boot function to resample our data 1000 times, taking a difference in medians each time, and saving the results into an object called boot.out.

boot.out <- boot(data = dat, statistic = med.diff, R = 1000)

The boot.out object is a list object. The element named “t” contains the 1000 differences in medians. Taking the median of those values gives us a point estimate of the difference in medians. Below we get -5.05, but you will likely get something different.

median(boot.out$t)
[1] -5.05
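
That run-to-run variation is expected because the resampling is random. If you want reproducible bootstrap results, one option is to set a seed before calling boot (the seed value below is an arbitrary choice):

set.seed(1)  # arbitrary seed so the resamples are reproducible
boot.out <- boot(data = dat, statistic = med.diff, R = 1000)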

Next we use the boot.ci function to calculate confidence intervals. We specify type = "perc" to obtain the bootstrap percentile interval.

boot.ci(boot.out, type = "perc")
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates

CALL : 
boot.ci(boot.out = boot.out, type = "perc")

Intervals : 
Level     Percentile     
95%   (-9.399, -0.100 )  
Calculations and Intervals on Original Scale

We notice the interval is not too different from what the wilcox.test function returned, though somewhat wider at the lower bound. Like the Wilcoxon rank sum test, bootstrapping is a non-parametric approach that can be useful for small and/or non-normal data.

References

Hogg, R.V. and Tanis, E.A., Probability and Statistical Inference, 7th Ed, Prentice Hall, 2006.
R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

For questions or clarifications regarding this article, contact the UVa Library StatLab: statlab@virginia.edu

Clay Ford
Statistical Research Consultant
University of Virginia Library
Jan 5, 2017

Pairwise comparisons of proportions

Pairwise comparison means comparing all pairs of something. If I have three items A, B and C, that means comparing A to B, A to C, and B to C. Given n items, I can determine the number of possible pairs using the binomial coefficient:

$$\frac{n!}{2!(n - 2)!} = \binom{n}{2}$$

Using the R statistical computing environment, we can use the choose function to quickly calculate this. For example, how many possible 2-item combinations can I “choose” from 10 items:

choose(10,2)
[1] 45

We sometimes want to make pairwise comparisons to see where differences occur. Let’s say we go to 8 high schools in an area, survey 30 students at each school, and ask them whether or not they floss their teeth at least once a day. When finished we’ll have 8 proportions of students who answered “Yes”. An obvious first step would be to conduct a hypothesis test for any differences between these proportions. The null would be no difference between the proportions versus some difference. If we reject the null, we have evidence of differences. But where are the differences? This leads us to pairwise comparisons of proportions, where we make multiple comparisons. The outcome of these pairwise comparisons will hopefully tell us which schools have significantly different proportions of students flossing.

Making multiple comparisons leads to an increased chance of making a false discovery, i.e. rejecting a null hypothesis that should not have been rejected. When we run a hypothesis test, we always run a risk of finding something that isn’t there. Think of flipping a fair coin 10 times and getting 9 or 10 heads (or 0 or 1 heads). That’s improbable but not impossible. If it happened to us we may conclude the coin is unfair, but that would be the wrong conclusion if the coin truly was fair. It just so happened we were very unlucky to witness such an unusual event. As we said, the chance of this happening is low in a single trial, but we increase our chances of it happening by conducting multiple trials.

The probability of observing 0, 1, 9 or 10 heads when flipping a fair coin 10 times is about 2% which can be calculated in R as follows:

pbinom(q = 1, size = 10, prob = 0.5) * 2
[1] 0.02148438

Therefore the probability of getting 2 to 8 heads is about 98%:

1 - pbinom(q = 1, size = 10, prob = 0.5) * 2
[1] 0.9785156

The probability of getting 2 to 8 heads in every one of 10 trials (where each trial is 10 flips) is 98% multiplied by itself 10 times:

(1 - pbinom(q = 1, size = 10, prob = 0.5) * 2)^10
[1] 0.8047809

Therefore the probability of getting 0, 1, 9, or 10 heads at least once in 10 trials is now about 20%:

1 - (1 - pbinom(q = 1, size = 10, prob = 0.5) * 2)^10
[1] 0.1952191

We can think of this as doing multiple hypothesis tests. Flip 10 coins 10 times each, get the proportion of heads for each coin, and use 10 one-sample proportion tests to statistically determine if the results we got are consistent with a fair coin. In other words, do we get any p-values less than, say, 0.05?

We can simulate this in R. First we replicate 1,000 times the act of flipping 10 fair coins 10 times each and counting the number of heads using the rbinom function. This produces a 10 x 1000 matrix of results that we save as “coin.flips”. We then apply a function to each column of the matrix that runs 10 one-sample proportion tests using the prop.test function and saves a TRUE/FALSE value if any of the p-values are less than 0.05 (we talk more about the prop.test function below). This returns a vector we save as “results” that contains TRUE or FALSE for each replicate. R treats TRUE and FALSE as 0 or 1, so calling mean on results returns the proportion of TRUEs in the vector. We get about 20%, confirming our calculations. (If you run the code below you’ll probably get a slightly different but similar answer.)

trials <- 10
coin.flips <- replicate(1000, rbinom(n = 10, size = trials, prob = 0.5))

multHypTest <- function(x){
  pvs <- sapply(x, function(heads) prop.test(x = heads, n = trials, p = 0.5)$p.value)
  any(pvs < 0.05)
}

results <- apply(coin.flips, 2, multHypTest)
mean(results)
[1] 0.206

That’s just for 10 trials. What about 15 or 20 or more? You can re-run the code above with trials set to a different value. We can also visualize it by plotting the probability of an unusual result (0, 1, 9, or 10 heads) versus the number of tests. Notice how rapidly the probability of a false discovery increases with the number of tests.

curve(expr = 1 - (1 - pbinom(q = 1, size = 10, prob = 0.5) * 2)^x, 
      xlim = c(1,50),
      xlab = "Number of tests",
      ylab = "Probability of 0, 1, 9, or 10 heads")

[Plot: probability of observing 0, 1, 9, or 10 heads versus the number of tests]

So what does all of this tell us? It reveals that traditional significance levels such as 0.05 are too high when conducting multiple hypothesis tests. We need to either adjust our significance level or adjust our p-values. As we’ll see, the usual approach is to adjust the p-values using one of several methods for p-value adjustment.

Let’s return to our example of examining the proportion of high school students (sample size 30 at each school) who floss at 8 different high schools. We’ll simulate this data as if the true proportion is 30% at each school (i.e., no difference). We use set.seed to make the data reproducible.

set.seed(15)
n <- 30
k <- 8
school <- rep(1:k, each = n)
floss <- replicate(k, sample(x = c("Y","N"), 
                             size = n, 
                             prob = c(0.3, 0.7), 
                             replace = TRUE))
dat <- data.frame(school, floss = as.vector(floss))

With our data generated, we can tabulate the number of Yes and No responses at each school:

flossTab <- with(dat, table(school, floss))
flossTab
      floss
school  N  Y
     1 18 12
     2 19 11
     3 14 16
     4 19 11
     5 26  4
     6 15 15
     7 20 10
     8 21  9

Using prop.table we can determine the proportions. Specifying margin = 1 means proportions are calculated across the rows for each school. (We also round to two decimal places for presentation purposes.) The second column contains the proportion of students who answered Yes at each school.

round(prop.table(flossTab, margin = 1),2)
      floss
school    N    Y
     1 0.60 0.40
     2 0.63 0.37
     3 0.47 0.53
     4 0.63 0.37
     5 0.87 0.13
     6 0.50 0.50
     7 0.67 0.33
     8 0.70 0.30

First we might want to run a test to see if we can statistically conclude that not all proportions are equal. We can do this with the prop.test function. The prop.test function requires that Yes (or “success”) counts be in the first column of a table and No (or “failure”) counts in the second column. Thus we switch the columns using subsetting brackets with a vector indicating column order.

prop.test(flossTab[,c("Y","N")])

	8-sample test for equality of proportions without continuity correction

data:  flossTab[, c("Y", "N")]
X-squared = 13.78, df = 7, p-value = 0.05524
alternative hypothesis: two.sided
sample estimates:
   prop 1    prop 2    prop 3    prop 4    prop 5    prop 6    prop 7    prop 8 
0.4000000 0.3666667 0.5333333 0.3666667 0.1333333 0.5000000 0.3333333 0.3000000 

The p-value of 0.055 is borderline significant and indicates some evidence of differences among proportions. We generated the data so we know there actually is no difference! But if this were real data that we had spent considerable resources collecting, we might be led to believe (perhaps even want to believe) some differences indeed exist. That p-value is so close to significance! School #5, in particular, with a proportion of 13% looks far lower than school #3 with 53%. We could conclude this hypothesis test is significant at 0.10 level and proceed to pairwise comparisons.

To do that in R we use the pairwise.prop.test function which requires a table in the same format as prop.test, Yes counts in the first column and No counts in the second column:

pairwise.prop.test(x = flossTab[,c("Y","N")])

	Pairwise comparisons using Pairwise comparison of proportions 

data:  flossTab[, c("Y", "N")] 

  1     2     3     4     5     6     7    
2 1.000 -     -     -     -     -     -    
3 1.000 1.000 -     -     -     -     -    
4 1.000 1.000 1.000 -     -     -     -    
5 1.000 1.000 0.073 1.000 -     -     -    
6 1.000 1.000 1.000 1.000 0.149 -     -    
7 1.000 1.000 1.000 1.000 1.000 1.000 -    
8 1.000 1.000 1.000 1.000 1.000 1.000 1.000

P value adjustment method: holm 

This produces a table of 28 p-values since there are 28 possible pairs between 8 items. We interpret the table by using row and column numbers to find the p-value for a particular pair. For example the p-value of 0.073 at the intersection of row 5 and column 3 is the p-value for the two-sample proportion test between school #5 and school #3. It appears to be insignificant at the traditional 5% level. All other p-values are clearly insignificant. In fact, most are 1. This is due to the p-value adjustment that was made. The output tells us the “holm” method was used. We won’t get into the details of how this method works, but suffice to say it increases the p-values in an effort to adjust for the many comparisons being made. In this case, it does what it’s supposed to: it adjusts the p-values and allows us to make a good case that there are no differences between schools, at least not at the 5% level, which is the correct decision.

We can do pairwise comparisons without adjusted p-values by setting p.adjust.method = "none". Let’s do that and see what happens:

# NOTE: This analysis is wrong!
pairwise.prop.test(x = flossTab[,c("Y","N")], p.adjust.method = "none")

	Pairwise comparisons using Pairwise comparison of proportions 

data:  flossTab[, c("Y", "N")] 

  1      2      3      4      5      6      7     
2 1.0000 -      -      -      -      -      -     
3 0.4376 0.2993 -      -      -      -      -     
4 1.0000 1.0000 0.2993 -      -      -      -     
5 0.0410 0.0736 0.0026 0.0736 -      -      -     
6 0.6038 0.4345 1.0000 0.4345 0.0055 -      -     
7 0.7888 1.0000 0.1927 1.0000 0.1270 0.2949 -     
8 0.5883 0.7842 0.1161 0.7842 0.2100 0.1876 1.0000

P value adjustment method: none 

Notice now we have significant differences for 3 pairs: (5,1), (5,3), and (6,5). Again we know this is wrong because we simulated the data. The truth is all schools have a floss rate of 30%. But we see that through random chance and not adjusting our p-values for multiple testing we got what look to be significant results. This illustrates the importance of using adjusted p-values when making multiple comparisons.

There are other p-value adjustment methods available. A common and conservative choice is the Bonferroni method, which simply multiplies all p-values by the number of pairs; in our example that is 28. To see all p-value adjustment methods available in R, enter ?p.adjust at the console.
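
To get a feel for how these adjustments behave, here is a small sketch using three made-up p-values (not our floss data) with the p.adjust function:

p <- c(0.001, 0.01, 0.04)
p.adjust(p, method = "bonferroni")  # each p-value multiplied by 3 (capped at 1)
[1] 0.003 0.030 0.120
p.adjust(p, method = "holm")        # ordered p-values multiplied by 3, 2, 1
[1] 0.003 0.020 0.040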

For questions or clarifications regarding this article, contact the UVa Library StatLab: statlab@virginia.edu

Clay Ford
Statistical Research Consultant
University of Virginia Library
October 20, 2016

Welcome Meagan

Meagan Christensen joined our Social, Natural, and Engineering Sciences team in August as our new Social Science Librarian, and we are thrilled to have her with us!

Meagan will be working with the Economics, Politics, Psychology, and Sociology departments, engaging with students and faculty on their research and information needs — in the classroom, through consultations, and online — and helping these communities navigate resources both within the Library and across the University.

Meagan earned her MLIS from the University of Washington and her B.A. in Psychology and Sociology (with a minor in French) from the University of Portland. She joined UVA in February 2014 as the Online and Distance Learning Librarian, and has also worked as part of the Teaching and Learning team in Academic Engagement.

You can find Meagan in the Library Data Commons at Curry (Ruffner 302). Or email her at mck6n@virginia.edu.

Stata Basics: foreach and forvalues

There are times when we need to do repetitive tasks during data preparation, analysis, or presentation, for instance computing a set of variables in the same manner, renaming or creating a series of variables, or recoding the values of a number of variables. In this post, I show a few simple example “loops” using the Stata commands -foreach-, -local- and -forvalues- to handle some common repetitive tasks.

-foreach-: loop over items

Consider this sample dataset of monthly average temperature for three years.

 
* input data
> clear
> input year mtemp1-mtemp12

          year     mtemp1     mtemp2     mtemp3     mtemp4     mtemp5     mtemp6     mtemp7     mtemp8     mtemp9    mtemp10    mtemp11    mtemp12
  1. 2013 4 3 5 14 18 23 25 22 19 15 7 6
  2. 2014 -1 3 5 13 19 23 24 23 21 15 7 5
  3. 2015 2 -1 7 14 21 24 25 24 21 14 11 10
  4. end

The mean temperatures for each month are in Celsius. If we want to convert them to Fahrenheit, we could write out the computation for each of the 12 variables.

generate fmtemp1 = mtemp1*(9/5)+32
generate fmtemp2 = mtemp2*(9/5)+32
generate fmtemp3 = mtemp3*(9/5)+32
generate fmtemp4 = mtemp4*(9/5)+32
generate fmtemp5 = mtemp5*(9/5)+32
generate fmtemp6 = mtemp6*(9/5)+32
generate fmtemp7 = mtemp7*(9/5)+32
generate fmtemp8 = mtemp8*(9/5)+32
generate fmtemp9 = mtemp9*(9/5)+32
generate fmtemp10 = mtemp10*(9/5)+32
generate fmtemp11 = mtemp11*(9/5)+32
generate fmtemp12 = mtemp12*(9/5)+32

However, this takes a lot of typing. Alternatively, we can use the -foreach- command to achieve the same goal. In the following code, we tell Stata to apply the same computation (C*(9/5) + 32) to each variable in the varlist mtemp1-mtemp12.

> foreach v of varlist mtemp1-mtemp12 {
    generate f`v' = `v'*(9/5)+32
  } 
 
* list variables
> ds
year      mtemp3    mtemp6    mtemp9    mtemp12   fmtemp3   fmtemp6   fmtemp9   fmtemp12
mtemp1    mtemp4    mtemp7    mtemp10   fmtemp1   fmtemp4   fmtemp7   fmtemp10
mtemp2    mtemp5    mtemp8    mtemp11   fmtemp2   fmtemp5   fmtemp8   fmtemp11

Note that braces must be specified with -foreach-. The open brace has to be on the same line as the foreach, and the close brace must be on a line by itself. It’s crucial to close loops properly, especially if you have one or more loops nested in another loop.

-local-: define macro

That was a rather simple repetitive task that could be handled solely by the -foreach- command. Here we introduce another command, -local-, which is often used together with commands like -foreach- to handle more complex repetitive tasks. The -local- command is a way of defining a macro in Stata. A Stata macro can contain multiple elements; it has a name and contents. Consider the following two examples:

 
* define a local macro called month
> local month jan feb mar apr

> display `"`month'"' 
jan feb mar apr

Define a local macro called mcode and another called month, update the contents of mcode inside the foreach loop, then display them in the form “mcode: month”.

> local mcode 0
> local month jan feb mar apr
> foreach m of local month {
    local mcode = `mcode' + 1
    display "`mcode': `m'"
   }
1: jan
2: feb
3: mar
4: apr

Note when you call a defined macro, it has to be wrapped in “`” (left tick) and “‘” (apostrophe) symbols.

Rename multiple variables

Take the temperature dataset we created as an example. Let’s say we want to rename variables mtemp1-mtemp12 as mtempjan-mtempdec. We can do so by tweaking the code from the previous example a bit.

Define local macro mcode and month, then rename the 12 vars in the foreach loop.

> local mcode 0
> local month jan feb mar apr may jun jul aug sep oct nov dec
> foreach m of local month {
    local mcode = `mcode' + 1
    rename mtemp`mcode' mtemp`m'
  }
> ds
year      mtempmar  mtempjun  mtempsep  mtempdec  fmtemp3   fmtemp6   fmtemp9   fmtemp12
mtempjan  mtempapr  mtempjul  mtempoct  fmtemp1   fmtemp4   fmtemp7   fmtemp10
mtempfeb  mtempmay  mtempaug  mtempnov  fmtemp2   fmtemp5   fmtemp8   fmtemp11

We can accomplish the same kind of renaming in a slightly different way. This time we use the 12 variables fmtemp1-fmtemp12 as examples and rename them as fmtempjan-fmtempdec.

Define the local macro month, then define the local macro monthII inside the foreach loop, using the word extended macro function (word # of string) to pull the nth month name out of month.

      
> local month jan feb mar apr may jun jul aug sep oct nov dec
> foreach n of numlist 1/12 {
    local monthII: word `n' of `month'
    display "`monthII'"
    rename fmtemp`n' fmtemp`monthII'   
  } 
jan
feb
mar
apr
may
jun
jul
aug
sep
oct
nov
dec

> ds
year       mtempmar   mtempjun   mtempsep   mtempdec   fmtempmar  fmtempjun  fmtempsep  fmtempdec
mtempjan   mtempapr   mtempjul   mtempoct   fmtempjan  fmtempapr  fmtempjul  fmtempoct
mtempfeb   mtempmay   mtempaug   mtempnov   fmtempfeb  fmtempmay  fmtempaug  fmtempnov

I usually run -display- to see what the macro looks like before actually applying it to tasks like changing variable names, just to make sure I don’t accidentally change them to something undesired or cause errors; however, the display line is not necessary in this case.

Here we rename them back to fmtemp1-fmtemp12.

> local mcode 0
> foreach n in jan feb mar apr may jun jul aug sep oct nov dec {
    local mcode = `mcode' + 1
    rename fmtemp`n' fmtemp`mcode'
  }

> ds
year      mtempmar  mtempjun  mtempsep  mtempdec  fmtemp3   fmtemp6   fmtemp9   fmtemp12
mtempjan  mtempapr  mtempjul  mtempoct  fmtemp1   fmtemp4   fmtemp7   fmtemp10
mtempfeb  mtempmay  mtempaug  mtempnov  fmtemp2   fmtemp5   fmtemp8   fmtemp11

-forvalues-: loop over consecutive values

The -forvalues- command is another command that gets used a lot for repetitive tasks. Consider the same temperature dataset we created. Suppose we would like to generate twelve dummy variables (warm1-warm12) indicating whether each monthly average temperature is higher than it was in the previous year. For example, warm1 for 2014 is coded 1 if the value of fmtemp1 for 2014 is higher than its value for 2013. All the warm variables are coded 99 for the year 2013, since there is no earlier year to compare against.

We could do this by running the following code, then repeating it twelve times to create the twelve variables warm1-warm12.

 
* _n is the current observation number, so fmtemp1[_n-1] refers to the previous observation. Type "help _n" for details and examples.
> generate warm1=1 if fmtemp1 > fmtemp1[_n-1]
(2 missing values generated)

> replace warm1=0 if fmtemp1 <= fmtemp1[_n-1]
(2 real changes made)

> replace warm1=99 if year==2013
(1 real change made)

> list year fmtemp1 warm1, clean

       year   fmtemp1   warm1  
  1.   2013      39.2      99  
  2.   2014      30.2       0  
  3.   2015      35.6       1  

However, this takes a lot of typing and invites mistakes as we type or copy and paste the code over and over.

 
* drop warm1 we generated
> drop warm1

Instead, we can use -forvalues- to do so:

> forvalues i=1/12 {
    generate warm`i'=1 if fmtemp`i' > fmtemp`i'[_n-1]
    replace warm`i'=0 if fmtemp`i' <= fmtemp`i'[_n-1]
    replace warm`i'=99 if year==2013
  }
 
* see the results
> list year fmtemp1-fmtemp3 warm1-warm3, clean 

       year   fmtemp1   fmtemp2   fmtemp3   warm1   warm2   warm3  
  1.   2013      39.2      37.4        41      99      99      99  
  2.   2014      30.2      37.4        41       0       0       0  
  3.   2015      35.6      30.2      44.6       1       0       1  

Reference
Baum, C. (2005). A little bit of Stata programming goes a long way… Working Papers in Economics, 69.

 

Yun Tai
CLIR Postdoctoral Fellow
University of Virginia Library