Occasionally we are asked to help students or faculty implement a mixed-effect model in SPSS. Our training and expertise are primarily in R, so it can be challenging to transfer and apply our knowledge to SPSS. In this article we document for posterity how to fit some basic mixed-effect models in R using the lme4 […]
Comparing the accuracy of two binary diagnostic tests in a paired study design
There are many medical tests for detecting the presence of a disease or condition. Some examples include tests for lesions, cancer, pregnancy, or COVID-19. While these tests are usually accurate, they’re not perfect. In addition, some tests are designed to detect the same condition, but use a different method. Recent examples are PCR and […]
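A standard way to compare two binary tests applied to the same subjects is McNemar's test, which uses only the discordant pairs (subjects the two tests classify differently). The counts below are hypothetical, and this chi-square version is just one sketch of the paired comparison the article sets up:

```python
import math

# Hypothetical discordant counts from a paired study of tests A and B
# applied to the same subjects:
b = 25   # test A positive, test B negative
c = 12   # test A negative, test B positive

# McNemar's chi-square statistic (1 degree of freedom) depends only on
# the discordant pairs; concordant pairs carry no information about
# which test is more accurate.
stat = (b - c) ** 2 / (b + c)

# Survival function of a chi-square(1) variable: P(X > stat) = erfc(sqrt(stat/2))
p_value = math.erfc(math.sqrt(stat / 2))
```

With these counts the statistic is about 4.57, suggesting the two tests disagree more often in one direction than chance alone would explain.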
Correlation of Fixed Effects in lme4
If you have ever used the R package lme4 to perform mixed-effect modeling you may have noticed the “Correlation of Fixed Effects” section at the bottom of the summary output. This article intends to shed some light on what this section means and how you might interpret it. To begin, let’s simulate some data. Below […]
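The "Correlation of Fixed Effects" is the correlation matrix of the coefficient *estimators*, i.e. the normalized variance-covariance matrix of the fixed effects. The same quantity exists for ordinary regression, so a plain-OLS sketch in numpy (simulated data, not from the article) can illustrate the usual interpretive point: a predictor far from zero makes the intercept and slope estimates strongly correlated, and centering the predictor removes that correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: predictor values far from zero
n = 200
x = rng.uniform(10, 20, n)
y = 2 + 0.5 * x + rng.normal(0, 1, n)

def coef_correlation(x, y):
    """Correlation matrix of the (intercept, slope) estimators in OLS,
    an analogue of lme4's 'Correlation of Fixed Effects' section."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    vcov = sigma2 * np.linalg.inv(X.T @ X)   # covariance of the estimators
    d = np.sqrt(np.diag(vcov))
    return vcov / np.outer(d, d)

# Uncentered predictor: intercept and slope estimates strongly correlated
r_raw = coef_correlation(x, y)[0, 1]

# Centered predictor: that correlation drops to (numerically) zero
r_centered = coef_correlation(x - x.mean(), y)[0, 1]
```

The strong negative `r_raw` says nothing alarming about the model; it just reflects where the intercept sits relative to the data.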
List Comprehensions in Python
List comprehensions are a topic a lot of new Python users struggle with. This article seeks to explain the benefits of list comprehensions and how list comprehensions work in a digestible manner. Single for loop list comprehension The following code uses a traditional for loop to change each string in a list from upper […]
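The contrast the excerpt describes can be sketched directly; both versions below produce the same lowercased list, the second in a single expression:

```python
words = ["APPLE", "BANANA", "CHERRY"]

# Traditional for loop: build the result one append at a time
lowered = []
for w in words:
    lowered.append(w.lower())

# Equivalent list comprehension: same result in one expression
lowered_lc = [w.lower() for w in words]
```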
Getting Started with the Kruskal-Wallis Test
What is it? One of the most well-known statistical tests to analyze the differences between means of given groups is the ANOVA (analysis of variance) test. While ANOVA is a great tool, it assumes that the data in question follows a normal distribution. What if your data doesn’t follow a normal distribution or if your […]
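The Kruskal-Wallis test is the rank-based alternative the excerpt is leading up to: it compares groups without assuming normality. A minimal sketch with made-up skewed data, using `scipy.stats.kruskal`:

```python
from scipy.stats import kruskal

# Hypothetical measurements from three groups (not assumed normal)
g1 = [2.1, 3.5, 2.8, 3.2, 2.2]
g2 = [3.9, 4.8, 4.4, 5.1, 4.0]
g3 = [7.5, 8.1, 6.9, 9.2, 7.7]

# Kruskal-Wallis compares the groups' mean ranks rather than their means
stat, p = kruskal(g1, g2, g3)
```

A small p-value indicates at least one group tends to have systematically larger (or smaller) values than the others, with no normality assumption required.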
A Beginner’s Guide to Marginal Effects
What are average marginal effects? (If you’re reading this, chances are you just asked this question.) If we unpack the phrase, it looks like we have effects that are marginal to something, all of which we average. So let’s look at each piece of this phrase and see if we can help you get a […]
The Intuition Behind Confidence Intervals
Say it with me: An X% confidence interval captures the population parameter in X% of repeated samples. In the course of our statistical educations, many of us had that line (or some variant of it) crammed, wedged, stuffed, and shoved into our skulls until definitional precision was leaking out of our noses and pooling on our […]
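That "repeated samples" definition can be made concrete by simulation: draw many samples from a known population, build a 95% interval from each, and count how often the interval captures the true mean. This sketch (with arbitrary population values) uses the normal approximation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Known population, so we can check each interval against the truth
mu, sigma, n, reps = 50, 10, 100, 10_000
z = 1.96                      # approximate 97.5th percentile of the normal

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
se = samples.std(axis=1, ddof=1) / np.sqrt(n)

# Does each interval (xbar ± 1.96·SE) capture the true mean?
covered = (xbar - z * se <= mu) & (mu <= xbar + z * se)

coverage = covered.mean()     # proportion of intervals capturing mu
```

The observed coverage lands close to 0.95, which is exactly the claim in the opening sentence.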
Power and Sample Size Analysis using Simulation
The power of a test is the probability of correctly rejecting a false null hypothesis. For example, let’s say we suspect a coin is not fair and lands heads 65% of the time. The null hypothesis is the coin is not biased to land heads. The alternative hypothesis is the coin is biased to land heads. […]
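The coin example lends itself to a simulation sketch: generate many experiments under the suspected truth (65% heads), test H0: p = 0.5 in each, and record how often the test rejects. The flip count and test (a normal-approximation z-test) are my choices for illustration, not necessarily the article's:

```python
import numpy as np

rng = np.random.default_rng(99)

# If the coin truly lands heads 65% of the time, how often does a
# two-sided test of H0: p = 0.5 reject at the 5% level with 100 flips?
n_flips, true_p, reps = 100, 0.65, 5_000

heads = rng.binomial(n_flips, true_p, size=reps)

# Normal-approximation z statistic under the null p = 0.5
z = (heads / n_flips - 0.5) / np.sqrt(0.5 * 0.5 / n_flips)

power = np.mean(np.abs(z) > 1.96)   # estimated power
```

Repeating the simulation at different values of `n_flips` is the simulation-based route to a sample size: increase n until the estimated power reaches the target (commonly 0.8 or 0.9).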
Post Hoc Power Calculations are Not Useful
It is well documented that post hoc power calculations are not useful (Goodman and Berlin 1994, Hoenig and Heisey 2001, Althouse 2020). Also known as observed power or retrospective power, post hoc power purports to estimate the power of a test given an observed effect size. The idea is to show that a “non-significant” hypothesis […]
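One reason the cited authors give is that, for a simple z-test, observed power is a deterministic function of the p-value, so it adds no information beyond the p-value itself. A small stdlib sketch of that relationship (treating the observed z statistic as if it were the true effect, which is exactly what post hoc power does):

```python
from statistics import NormalDist

norm = NormalDist()

def observed_power(p_value, alpha=0.05):
    """Post hoc 'observed power' for a two-sided z-test: the power you
    would compute by plugging the observed effect in as the true effect."""
    z_obs = norm.inv_cdf(1 - p_value / 2)
    z_crit = norm.inv_cdf(1 - alpha / 2)
    # Probability a z statistic centered at z_obs lands outside ±z_crit
    return (1 - norm.cdf(z_crit - z_obs)) + norm.cdf(-z_crit - z_obs)

# A result at exactly p = 0.05 always has observed power of about 50%
p_at_threshold = observed_power(0.05)
```

Since a "non-significant" result (p > 0.05) always maps to observed power below about 50%, reporting that the observed power was low restates the non-significance rather than explaining it.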
Understanding Ordered Factors in a Linear Model
Consider the following data from the text Design and Analysis of Experiments, 7th ed. (Montgomery, Table 3.1). It has two variables: power and rate. Power is a discrete setting on a tool used to etch circuits into a silicon wafer. There are four levels to choose from. Rate is the distance etched measured in Angstroms […]
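When a factor like power is declared ordered, R codes it with orthogonal polynomial contrasts (`contr.poly`): linear, quadratic, and cubic trend columns rather than dummy variables. A numpy sketch of how such contrast codes can be constructed for four levels, via QR decomposition of a Vandermonde matrix (one standard construction; R's internals may differ in detail):

```python
import numpy as np

def poly_contrasts(k):
    """Orthogonal polynomial contrast codes for an ordered factor with k
    levels -- a numpy sketch of what R's contr.poly(k) returns."""
    levels = np.arange(1, k + 1, dtype=float)
    # Vandermonde matrix with columns 1, x, x^2, ..., x^(k-1)
    V = np.vander(levels, k, increasing=True)
    Q, R = np.linalg.qr(V)
    # Drop the constant column; flip signs so each contrast is positively
    # correlated with the corresponding power of x (linear increases, etc.)
    signs = np.sign(np.diag(R)[1:])
    return Q[:, 1:] * signs

C = poly_contrasts(4)   # columns: linear, quadratic, cubic trends
```

The columns are orthonormal and sum to zero, so the coefficients on them in a linear model estimate separate linear, quadratic, and cubic trends in rate across the four power settings.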