“Web scraping” or “data scraping” is simply the process of extracting data from a website. This can, of course, be done manually: you could go to a website, find the relevant data or information, and enter that information into some data file that you have stored locally. But imagine that you want to pull a […]
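To make the idea concrete, here is a minimal sketch of the "extract data from a page" step using only Python's standard library. The HTML snippet is made up for illustration; in practice you would first download the page (e.g., with `urllib` or the third-party `requests` package) and the data of interest would be buried in a much larger document.

```python
from html.parser import HTMLParser

# A made-up stand-in for a fetched web page containing a small data table.
SAMPLE_PAGE = """
<table>
  <tr><td>Alice</td><td>42</td></tr>
  <tr><td>Bob</td><td>17</td></tr>
</table>
"""

class CellCollector(HTMLParser):
    """Collect the text content of every <td> cell in the page."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.cells.append(data.strip())

parser = CellCollector()
parser.feed(SAMPLE_PAGE)
# parser.cells now holds the scraped values, ready to write to a local file.
```

Libraries like BeautifulSoup wrap this same parsing work in a friendlier interface, but the underlying task is identical: fetch HTML, locate the elements holding the data, and pull out their text.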

## A Brief on Brier Scores

Not all predictions are created equal, even if, in categorical terms, the predictions suggest the same outcome: “X will (or won’t) happen.” Say that I estimate that there’s a 60% chance that 100 million COVID-19 vaccines will be administered in the US during the first 100 days of Biden’s presidency, but my friend estimates that […]
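The Brier score makes this comparison precise: it is the mean squared difference between probabilistic forecasts and the 0/1 outcomes, so lower is better. A quick sketch (the friend's 90% figure is hypothetical, since the excerpt above is truncated):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Suppose the vaccination milestone was in fact reached (outcome = 1).
mine = brier_score([0.60], [1])    # (0.60 - 1)^2 = 0.16
friend = brier_score([0.90], [1])  # (0.90 - 1)^2 = 0.01
```

Both forecasts pointed to the same categorical outcome ("it will happen"), but the more confident correct forecast earns the much better (lower) score.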

## Getting Started with pandas in Python

The pandas package is an open-source software library written for data analysis in Python. Pandas allows users to import data from various file formats (comma-separated values, JSON, SQL, fits, etc.) and perform data manipulation operations, including cleaning and reshaping the data, summarizing observations, grouping data, and merging multiple datasets. In this article, we’ll explore briefly […]
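As a small taste of those operations, here is a made-up dataset run through a typical import → clean → summarize flow with pandas:

```python
import pandas as pd

# A tiny illustrative dataset with one missing value to clean.
df = pd.DataFrame({
    "city": ["Richmond", "Richmond", "Norfolk", "Norfolk"],
    "year": [2020, 2021, 2020, 2021],
    "sales": [100, 120, 80, None],
})

df["sales"] = df["sales"].fillna(0)          # a simple cleaning step
totals = df.groupby("city")["sales"].sum()   # summarize observations by group
```

The same pattern scales up: `pd.read_csv()` or `pd.read_json()` in place of the hand-built DataFrame, and chains of grouping, reshaping, and merging in place of the single `groupby`.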

## Understanding Multiple Comparisons and Simultaneous Inference

When it comes to confidence intervals and hypothesis testing, there are two important limitations to keep in mind. The significance level, \(\alpha\), or the confidence interval coverage, \(1 – \alpha\), only applies to one test or estimate, not to a series of tests or estimates. They are only appropriate if the estimate or test was not […]
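The first limitation is easy to see with a little arithmetic. If each of \(m\) independent tests is run at level \(\alpha\), the chance of at least one false positive across the whole family is \(1 - (1 - \alpha)^m\), which grows quickly with \(m\):

```python
# Family-wise error rate for m independent tests, each at level alpha.
alpha, m = 0.05, 20
familywise = 1 - (1 - alpha) ** m   # roughly 0.64 for 20 tests

# The Bonferroni correction tests each hypothesis at alpha / m instead,
# which holds the family-wise error rate at no more than alpha.
bonferroni_level = alpha / m        # 0.0025 per test
```

So with 20 tests at the conventional 5% level, the odds of at least one spurious "significant" result are closer to a coin flip than to 5% — which is exactly why simultaneous-inference adjustments like Bonferroni's exist.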

## Data Scientist as Cartographer: An Introduction to Making Interactive Maps in R with Leaflet

Note: This version of the article contains static images of maps generated with Leaflet. To view a version with interactive maps, click here. A striking feature of many maps from early in the history of cartography is their linearity. Being primarily for travel (and given the technological limitations on how faithfully geographies could be understood […]

## Understanding Robust Standard Errors

What are robust standard errors? How do we calculate them? Why use them? Why not use them all the time if they’re so robust? Those are the kinds of questions this post intends to address. To begin, let’s start with the relatively easy part: getting robust standard errors for basic linear models in Stata and […]
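The post works in Stata and R, but the sandwich calculation itself is compact enough to sketch directly. Below is a NumPy version on simulated heteroskedastic data (the data and the HC1 variant are my choices for illustration): classical standard errors assume constant error variance, while the robust version plugs the squared residuals into the "meat" of the sandwich.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data where the error spread grows with x -- the heteroskedastic
# setting that robust (sandwich) standard errors are designed for.
n = 200
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 0.5 * x)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficients
resid = y - X @ beta

XtX_inv = np.linalg.inv(X.T @ X)

# Classical variance: sigma^2 * (X'X)^-1, assuming constant error variance.
sigma2 = resid @ resid / (n - 2)
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# HC1 sandwich: (X'X)^-1 X' diag(e_i^2) X (X'X)^-1, scaled by n/(n - k).
meat = X.T @ (X * (resid ** 2)[:, None])
vcov_robust = (n / (n - 2)) * XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(vcov_robust))
```

This is the same quantity Stata reports with `, robust` (up to the small-sample correction used), so it is a useful reference point when checking output across packages.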

## Getting Started with Multinomial Logit Models

Multinomial logit models allow us to model membership in a group based on known variables. For example, operating system preference of a university’s students could be classified as “Windows”, “Mac”, or “Linux”. Perhaps we would like to better understand why students choose one OS versus another. We might want to build a statistical model that […]
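The mechanics behind such a model can be sketched briefly. Each non-baseline category gets its own linear predictor, and predicted membership probabilities come from the softmax transform; the coefficients below are invented for illustration, with Windows as the baseline category:

```python
import math

def softmax(scores):
    """Turn linear predictors into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical linear predictors for one student:
# Windows (baseline, fixed at 0), Mac, Linux.
eta = [0.0, 0.8, -0.5]
probs = softmax(eta)  # predicted P(Windows), P(Mac), P(Linux)
```

Fitting the model means estimating the coefficients that produce these predictors from the known variables; prediction then just picks the category with the highest probability.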

## Understanding Empirical Cumulative Distribution Functions

What are empirical cumulative distribution functions and what can we do with them? To answer the first question, let’s first step back and make sure we understand “distributions,” or more specifically, “probability distributions.” A basic probability distribution: imagine a simple event, say flipping a coin 3 times. Here are all the possible outcomes, where H […]
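The ECDF itself is simple to state: \(\hat{F}(t)\) is the fraction of observations less than or equal to \(t\). A sketch using the coin-flip setup (the sample below lists one observation of "number of heads" per equally likely outcome of 3 flips, so the ECDF matches the true CDF exactly):

```python
def ecdf(sample, t):
    """Fraction of observations in the sample that are <= t."""
    return sum(1 for v in sample if v <= t) / len(sample)

# Number of heads in 3 flips: 0 once, 1 three times, 2 three times, 3 once.
sample = [0, 1, 1, 1, 2, 2, 2, 3]
prob_at_most_1 = ecdf(sample, 1)  # 4/8 = 0.5, matching P(heads <= 1)
```

With real data the sample is not a perfect enumeration, and the ECDF becomes an estimate of the unknown underlying CDF — which is what makes it useful.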

## Getting Started with Rate Models

Let’s say we’re interested in modeling the number of auto accidents that occur at various intersections within a city. After collecting data for a certain period of time, perhaps we notice that two intersections have the same number of accidents, say 25. Is it correct to conclude that these two intersections are similar in their propensity for […]
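The intuition motivating rate models is that equal counts can hide very different risks once exposure is accounted for. A quick numerical sketch with made-up traffic volumes:

```python
# Two intersections with identical accident counts but different exposure.
accidents = {"A": 25, "B": 25}
vehicles = {"A": 500_000, "B": 50_000}  # vehicles passing through per year

# Accidents per 10,000 vehicles -- the exposure-adjusted rate.
rate = {k: accidents[k] / vehicles[k] * 10_000 for k in accidents}
# rate["A"] = 0.5, rate["B"] = 5.0: B is 10x riskier despite equal counts.
```

Rate models (e.g., Poisson regression with an offset for exposure) formalize exactly this adjustment, modeling the count relative to the opportunity for events to occur.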

## Getting Started with Regular Expressions

Regular expressions (or regex) are tools for matching patterns in character strings. These can be useful for finding words or letter patterns in text, parsing filenames for specific information, and interpreting input formatted in a variety of ways (e.g., phone numbers). The syntax of regular expressions is generally recognized across operating systems and programming languages. […]
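Taking the phone-number case as a quick illustration, here is a sketch in Python (the pattern is a deliberately simple one that accepts `-`, `.`, or a space between the digit groups, not a production-grade validator):

```python
import re

# Match US-style phone numbers: three digits, a separator, three digits,
# a separator, four digits, bounded by word boundaries.
pattern = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

text = "Call 434-555-1234 or 434.555.9876; ignore 12-34."
numbers = pattern.findall(text)  # ['434-555-1234', '434.555.9876']
```

Because regex syntax is broadly shared, essentially this same pattern works in R (`grepl`, `str_detect`), JavaScript, grep, and most text editors.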