Mixed Models: Models

This article is part of the Mixed Model series. For a list of topics covered by this series, see the Introduction article. If you're new to mixed models we highly recommend reading the articles in order.

Overview

This article introduces mixed models, models which include both random effects and fixed effects. It provides a high-level overview of the theoretical basis for mixed models and covers the difference between fixed and mixed models. The article ends with how to specify random terms in lmer() and glmer() and how to read the results from these functions.

Preliminaries

You will get the most from this article if you follow along with the examples in RStudio. Working the exercises will further enhance your skills with the material. The following steps will prepare your R session to run the examples included in this article.

  • Start RStudio.

  • Open a new script and save it as MixMod.

  • The examples used in this article require several packages to be loaded. If the lme4, MASS, car, pbkrtest, RLRsim, ggplot2, and pbnm packages are not in your list of packages in RStudio's Packages tab, install the missing packages.

    Note, instructions for loading the pbnm package can be found at Loading pbnm.

  • Enter the following commands in your MixMod script and run them.

    #####################################################
    #####################################################
    ##
    ##   Random Effect examples from UW Madison SSCC
    ##   
    ##   These examples demonstrate some commonly used
    ##   functions for modeling random effects using
    ##   lme4.
    ## 
    #####################################################
    #####################################################
    
    library(lme4)
    library(MASS)
    library(car)
    library(ggplot2)
    library(pbkrtest)
    library(RLRsim)
    library(pbnm)
    
    data(pbDat)

  • There are no console results of interest from these commands.

Effects

An effect is a difference in a measure which is associated with an event. The coefficients of a regression model are effects associated with either membership in a group (categorical variables) or a unit change in a measure (continuous variables). Effects associated with continuous variables are commonly called slopes and represent the change in the response per unit change in the variable, typically a linear relationship. The effect for the reference group of a categorical variable is called an intercept. The coefficients associated with all other groups of the categorical variable represent changes to the intercept, a single discrete shift in the response.

There are a number of definitions of fixed and random effects in common use. Each of the definitions has its advantages and disadvantages. The definitions provided in this article series were chosen to focus on the interpretations which can be made based on the decision to model a variable as fixed or random. This choice of definitions is not intended to reject or diminish any of the other common definitions.

The following example highlights the different inferences associated with the choice of modelling a variable as random versus fixed. A study was done on student success rates at a university, with data collected from students in twenty programs. If we are interested in just these twenty programs, fixed effects would be used. If we are interested in all programs at the university and these twenty are a representative sample, random effects would be used. In the first model we could make inferences only about the success rates of, and differences among, these twenty programs; we could make no inferences beyond them. In the second model we could make inferences about the success rate of the population of programs at the university, as well as about the variance of that population.

When a categorical variable contains all possible levels of interest, the effects for these levels are called fixed effects. When a categorical variable contains a sample of all possible levels of interest, the effects are random effects. These sampled levels are considered independent samples of an "unobserved variable". The variable is called unobserved because we do not have a complete view of its distribution: while a mean and variance will be estimated, information about its shape/form will not be determined. The term "unobserved" is used because there are other levels of interest which were not sampled. These are not missing values.

The decision to model a categorical variable as a set of fixed effects or as a sample of possible levels of some unobserved random variable determines what interpretations can be made from the model. If fixed effects are used, inferences can be made about the specific levels of the categorical variable as well as differences between levels. If random effects are used, inferences are typically made about the population. We recommend that the choice of modelling a variable as random or fixed be driven by the hypotheses being tested. While constructing models, it may be determined that a random effect needs to be changed to a fixed effect (likely due to limited data). This change quite possibly will cause a change in the hypothesis being tested.

The levels of a random effect are assumed to be independent of each other and representative of the population they are sampled from. These assumptions cannot be checked from the model; the modelling decision is made based on information about how the data set was created.

A model with random effects and no specified fixed effects will still contain an intercept. As such, all models with random effects also contain at least one fixed effect. Therefore, a model is either a fixed effects model (contains no random effects) or a mixed effects model (contains both fixed and random effects). Mixed effects models are often referred to simply as mixed models.

Mixed effect models

Ordinary least squares models fit the unconditional response, \(\boldsymbol{Y}\), with the assumption of normally distributed errors. The response is the mean associated with a single value for each of the independent variables. The set of independent variables and their observed values are denoted as \(\boldsymbol{X}\) and the estimated effects of these variables are denoted as \(\boldsymbol{\beta}\). These models can be expressed as

\[Y \ \ \sim \ \ N(\boldsymbol{X\beta},\sigma^2 \boldsymbol{I}).\]

A linear mixed model includes at least one unobserved variable. The unobserved variable is modelled in both the fixed and random parts of a mixed model. The mean of an unobserved variable is included in the estimates of the fixed portion of the model (\(\boldsymbol{\beta}\)). The variation about the mean of the unobserved variable is captured in a vector of random variables denoted \(\boldsymbol{B}\). Since the mean has been removed, \(\boldsymbol{B}\) has a mean of 0. The sampled levels of \(\boldsymbol{B}\) are denoted \(\boldsymbol{b}\); thus \(\boldsymbol{b}\) is the observed differences from the mean of the unobserved variable. The fitted response variable, \(Y\), of a mixed model is conditional on the sampled effects (\(\boldsymbol{b}\)). This model can be written as

\[Y \, | \, \boldsymbol{B}=\boldsymbol{b} \ \ \sim \ \ N(\boldsymbol{X\beta} + \boldsymbol{Z b}, \sigma^2 \boldsymbol{I})\]

\[\boldsymbol{B} \ \ \sim \ \ N(\boldsymbol{0}, \boldsymbol{\Sigma_{\theta}})\]

where \(\boldsymbol{\theta}\) is the vector of variances (and possibly covariances) of the distributions of the unobserved random variables, \(\boldsymbol{\Sigma_{\theta}}\) is the variance-covariance matrix of \(\boldsymbol{B}\), and \(\boldsymbol{Z}\) is the design matrix of indicator variables for the sampled effects. The parameters of this model are \(\boldsymbol{\beta}\) and \(\boldsymbol{\theta}\). The random effects, \(\boldsymbol{b}\), and the covariance matrix, \(\boldsymbol{\Sigma_{\theta}}\), are calculated as a result of estimating \(\boldsymbol{\theta}\). As such, \(\boldsymbol{b}\) and \(\boldsymbol{\Sigma_{\theta}}\) are not formal parameters of the model.

The random effects, the individual levels of \(\boldsymbol{b}\), are assumed to be normally distributed for linear mixed models. For generalized mixed models the random effects are assumed to have a normal distribution on the link scale, which results in non-normal distributions on the response scale when the link function is nonlinear, as with the logit function. There are texts and papers which allow the unobserved random variable to be non-normally distributed; the lme4 documentation specifies an assumption of normality for the unobserved variable.

Each variance, or covariance, in the \(\boldsymbol{\theta}\) vector is associated with an intercept or slope in the fixed portion of the model (\(\boldsymbol{\beta}\)). The intercepts and slopes which are associated with a random \(\boldsymbol{\theta}\) element are interpreted as the mean effect for a population. This may be in addition to an intercept being interpreted as the reference level for other categorical variables in the fixed portion of the model.

The predicted \(\boldsymbol{Y}\)s from this model are constants only when they are conditioned on known levels of \(\boldsymbol{B}\). The unconditional \(\boldsymbol{Y}\) is a vector of random variables. We have an estimate of the means (from \(\boldsymbol{X \beta}\)) and variances (from \(\boldsymbol{\theta}\)).

Consider the following example which has a single unobserved variable and a single continuous variable. The unobserved variable in this example is sampled twenty times. This means that the variable takes on 20 different values in the data set. This variable could take on many more values, possibly infinitely many if the unobserved variable is continuous. But we only have information on the twenty sampled levels of the unobserved variable. A simulation sketch of this model appears after the list of model components below.

\[\widehat{Y} \, | \, (\boldsymbol{B}=\boldsymbol{b}) \ \ = \ \ \boldsymbol{\widehat{\beta}}_0 + \boldsymbol{x \widehat{\beta}}_1 + \boldsymbol{Z b}\]

  • \(\theta\) is a single value. It is the variance of the single unobserved variable.

  • \(\boldsymbol{\Sigma_{\theta}}\) is a 20 by 20 matrix. There is a row and column for each of the twenty sampled values. The diagonal elements of the matrix are all equal to \(\theta\), since each of the sampled values is from the same unobserved random variable. All off-diagonal elements are 0, since the sampled values are assumed to be independent.

  • \(\boldsymbol{b}\) is the twenty random effects calculated based on the estimated \(\theta\).

  • \(\boldsymbol{Z}\) is an \(n\) by 20 matrix with the columns being the indicator variables for which sample each observation is from.

  • \(\boldsymbol{\beta}_0\) is the mean value of the unobserved variable when \(\boldsymbol{x}\) equals zero.

  • \(\boldsymbol{\beta}_1\) is the change in the mean value of the response variable associated with a one-unit increase in \(\boldsymbol{x}\).
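
The model just described can be simulated and fit with lmer(). The sketch below is illustrative only: the group sizes, coefficient values, and variances are arbitrary choices, and simMod is a hypothetical object name.

    # Simulate the single random intercept model described above:
    # twenty sampled levels of the unobserved variable, one continuous x
    set.seed(42)
    n.groups <- 20
    n.per    <- 10
    g <- factor(rep(1:n.groups, each = n.per))      # group identifier
    x <- runif(n.groups * n.per)                    # continuous variable
    b <- rnorm(n.groups, mean = 0, sd = 2)          # b ~ N(0, theta), theta = 4
    y <- 1 + 3 * x + b[g] + rnorm(n.groups * n.per) # beta0 = 1, beta1 = 3
    
    simMod <- lmer(y ~ x + (1 | g))
    summary(simMod)

The estimated group variance approximates \(\theta\), and the estimated fixed intercept approximates the mean of the unobserved variable at \(\boldsymbol{x} = 0\).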

Random intercepts and random slopes

A random intercept is an intercept which has a variance from the random component of the model associated with it. A random slope, similarly, is a slope which has a variance associated with it. Both are estimates which are expected to vary over some population, which is represented by a sample in the model.

When a random slope and random intercept are associated with a single population (grouping variable), one needs to consider whether the random intercept and the random slope are correlated. This will be demonstrated in the random slope example.

Mixed model formula specification in R

Mixed model formulas are an extension of R formulas. An introduction to R formulas and to specifying fixed effects is provided in the R For Researchers: Regression (OLS) article.

An unobserved variable is specified in two parts. The first part identifies the intercepts and slopes which are to be modelled as random. This is done using an R formula expression. A random intercept or slope will be modelled for each term provided in the R formula expression.

The second part of the specification identifies the sampled levels. The sampled levels are given by a categorical variable, or a formula expression which evaluates to a categorical variable. The categorical variable given in the random effect specification is the groups identifier for the random effects.

These two parts are placed inside parentheses, (), and are separated by the bar, "|". The syntax for including a random effect in a formula is shown below.

+ ( effect expression | groups )

The following are a few examples of specifying random effects. Each example provides the R formula, a description of the model parameters, and the mean and variance of the true model which is estimated by the regression and observed values. A sketch of the lmer() calls corresponding to these formulas follows the examples.

  • The examples will use the following variables.

    • A: factor with 15 levels
    • B: factor with 25 levels
    • C: numeric
    • Y: numeric

    The following examples will use \(\boldsymbol{B}\) to represent a vector which contains all of the unobserved random variables of the model and \(\boldsymbol{b}\) to represent a particular instance of \(\boldsymbol{B}\). The model variable B represents something different from \(\boldsymbol{B}\) and \(\boldsymbol{b}\). The model variable B identifies a set of groups. The groups of B can be modeled as fixed effects or as a sample from a population. If B is modeled as a sample, then there will be one or more random variables associated with the population and these random variables will be members of the vector \(\boldsymbol{B}\). The variable A could also be modeled as a sample, in which case the random variables associated with it would also be members of \(\boldsymbol{B}\).

  • Y ~ C + (1|B) results in the following model parameters
    • (intercept) mean intercept associated with the groups of B at C = 0, \(\beta_0\)
    • slope effect associated with C, \(\beta_1\)
    • variance of intercept associated with the groups of B, \(\theta\)
    • variance of residuals, \(\sigma^2\)

    Y\(_i\) \(| (\boldsymbol{B} = \boldsymbol{b}_{j[i]})\) will have a mean of \(\beta_0 + \beta_1 c_i + b_{j[i]}\) with variance of \(\sigma^2\), where \(c_i\) is the value of C from the \(i\)th observation, \(j[i]\) is the group of B that observation \(i\) is a member of, and \(b_{j[i]}\) is the effect associated with the \(j\)th sample of the model variable B. The model variable B is our sample of the unobserved variable \(\boldsymbol{B}\).

    Y\(_i\) will have a mean of \(\beta_0 + \beta_1 c_i\) with variance of \(\theta + \sigma^2\).

  • Y ~ C + A + (1|B) results in the following model parameters
    • (intercept) (mean intercept associated with the groups of B at the reference level of variable A and C = 0), \(\beta_0\)
    • slope effect associated with C, \(\beta_1\)
    • 14 intercept fixed effects associated with the non-reference levels of A, \(\beta_2, \dots, \beta_{15}\)
    • variance of intercept associated with the groups of B, \(\theta\)
    • variance of residuals, \(\sigma^2\)

    Y\(_i\) \(| (\boldsymbol{B} = \boldsymbol{b}_{j[i]})\) will have a mean of \(\beta_0 + \beta_1 c_i + \sum_{k=2}^{15} \beta_k a_{k[i]} + b_{j[i]}\) with variance of \(\sigma^2\), where \(c_i\) is the value of C from the \(i\)th observation, \(k[i]\) is the \(k\)th group of A associated with the \(i\)th observation, \(a_{k[i]}\) is an indicator which is true if the \(i\)th observation belongs to the \(k\)th group of A, \(j[i]\) is the group of B that observation \(i\) is a member of, and \(b_{j[i]}\) is the effect associated with the \(j\)th sample of the model variable B. The model variable B is our sample of the unobserved variable \(\boldsymbol{B}\).

    Y\(_i\) will have a mean of \(\beta_0 + \beta_1 c_i + \sum_{k=2}^{15} \beta_k a_{k[i]}\) with variance of \(\theta + \sigma^2\).

  • Y ~ C + (1|A) + (0+C|A) results in the following model parameters
    • (intercept) (mean intercept associated with the groups of A at C = 0), \(\beta_0\)
    • slope effect associated with C (mean slope associated with population A), \(\beta_1\)
    • variance of intercept associated with A, \(\theta_1\)
    • variance of slope of C associated with A, \(\theta_2\)
    • variance of residuals, \(\sigma^2\)

    Y\(_i\) \(| (\boldsymbol{B} = \boldsymbol{b}_{j[i]})\) will have a mean of \(\beta_0 + \beta_1 c_i + b_{1,j[i]} + b_{2,j[i]}c_i\) with variance of \(\sigma^2\), where \(c_i\) is the value of C from the \(i\)th observation, \(j[i]\) is the group of model variable A that observation \(i\) is a member of, \(b_{1,j[i]}\) is the intercept effect associated with the \(j\)th sample of the model variable A, and \(b_{2,j[i]}\) is the slope effect associated with the \(j\)th sample of the model variable A. There are two unobserved variables associated with the population sampled by A; \(\boldsymbol{B}\) is the random vector \([ \boldsymbol{b}_1 \ \boldsymbol{b}_2]\).

    Y\(_i\) will have a mean of \(\beta_0 + \beta_1 c_i\) with variance of \(\theta_1 + \theta_2 c_i^2 + \sigma^2\). The slope effect \(b_{2,j[i]}\) is multiplied by \(c_i\), so its variance contribution scales with \(c_i^2\); there is no covariance term, since this specification models the intercept and slope as uncorrelated.
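
The formulas above can be passed directly to lmer(). The sketch below is illustrative: it assumes A, B, C, and Y are columns of a hypothetical data frame named dat.

    # Random intercept for the groups of B (first example)
    m1 <- lmer(Y ~ C + (1 | B), data = dat)
    
    # Fixed effects for A plus a random intercept for B (second example)
    m2 <- lmer(Y ~ C + A + (1 | B), data = dat)
    
    # Uncorrelated random intercept and random slope of C, both grouped
    # by A (third example)
    m3 <- lmer(Y ~ C + (1 | A) + (0 + C | A), data = dat)
    
    # To allow the intercept and slope to correlate (adding a covariance
    # to theta), combine the two terms instead:
    m4 <- lmer(Y ~ C + (1 + C | A), data = dat)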

Exercises

Use the variables defined above for these exercises.

  1. Write the R formula for a model with C as a fixed effect and two unobserved random variables. The random variables are random intercepts. One of the random variables is sampled as A, the other is sampled as B.

  2. What is the mean and variance for both the unconditional y and the conditional y from problem 1?

  3. Write the R formula for a model with a random slope for C and a correlated random intercept. The random slope is sampled as A.

  4. What is the mean and variance for both the unconditional y and the conditional y from problem 3?

Solutions

lmer() and glmer()

The lmer() (pronounced el-mer) and glmer() functions are used in the examples of this article. The lmer() function is for linear mixed models and the glmer() function is for generalized mixed models.

  • Syntax and use of the lmer() and glmer() functions

    lmer(formula, REML=logical, data=dataFrame)

    glmer(formula, family=familyName, data=dataFrame)

    Returns a model object of class merMod. The merMod object is a list of objects which result from fitting the model.

    The formula parameter is an R formula.

    REML defaults to TRUE. Setting REML to FALSE results in the model being fit by ML. This parameter is only available for lmer(); there is no agreed-upon definition of the REML criterion for generalized models.

    For generalized mixed models the familyName sets the link and variance function for the model. We will use the binomial family in this article. See R For Researchers: Regression (GLM) for further information on generalized model specification in R.

    data is an optional parameter. dataFrame specifies the data.frame which contains the variables to be fit. R will look in the current environment for variables which are not found in the data.frame.
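
A minimal sketch of an lmer() call is shown below. The variable names y, x, and g and the data frame myData are placeholders, not part of this article's data.

    # Linear mixed model: fixed slope for x, random intercept for the
    # groups of g, fit by ML rather than the default REML
    lmm <- lmer(y ~ x + (1 | g), REML = FALSE, data = myData)
    summary(lmm)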

We will demonstrate a binomial generalized mixed model, fit with glmer(), which contains a single random intercept. This model uses the response variable bin from the pbDat data set.

  • Enter the following commands in your script and run them.

    gmm <- glmer(bin~x1 + x2 + (1|g1), family=binomial, data=pbDat)
    summary(gmm)
  • The results of the above commands are shown below.

    Generalized linear mixed model fit by maximum likelihood (Laplace
      Approximation) [glmerMod]
     Family: binomial  ( logit )
    Formula: bin ~ x1 + x2 + (1 | g1)
       Data: pbDat
    
         AIC      BIC   logLik deviance df.resid 
       113.0    123.4    -52.5    105.0       96 
    
    Scaled residuals: 
        Min      1Q  Median      3Q     Max 
    -2.3497 -0.4628  0.1073  0.4802  1.8883 
    
    Random effects:
     Groups Name        Variance Std.Dev.
     g1     (Intercept) 4.255    2.063   
    Number of obs: 100, groups:  g1, 12
    
    Fixed effects:
                Estimate Std. Error z value Pr(>|z|)    
    (Intercept)   0.1651     0.6618   0.249 0.803024    
    x1            0.4996     0.2983   1.675 0.093919 .  
    x2           -1.5155     0.3949  -3.837 0.000124 ***
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
    Correlation of Fixed Effects:
       (Intr) x1    
    x1  0.097       
    x2 -0.029 -0.192

The glmer() summary starts with a description of the fitted model. This section indicates how the model was fit (REML, ML, or, as here, ML with the Laplace approximation) and shows the R formula which generated the model.

The model fit criteria are displayed next. Our model was fit with ML, not REML, and it has a log likelihood of -52.5 and a deviance of 105. This section will contain different summary statistics depending on the model form and REML status.

A summary of the scaled residuals is displayed next.

The random effects summary is displayed next. For each grouping variable used in a random term, the variance and standard deviation are provided for each specified random intercept and slope. These group variances are the model's \(\boldsymbol{\theta}\) parameters. The residual variance and standard deviation, if estimated, are provided after the variances of the groups. There are some generalized models in which the residual scale is fixed and the residual summary is not displayed. The random effects section of the summary also provides the number of levels for each of the grouping variables.

The fixed effects summary displays the effect estimate, its standard error, and the t or z value for each coefficient. The t and z statistics are from Wald tests of significance for the variable. Wald tests are not the most efficient tests for these models. See the Testing Significance of Effects article for suggested tests which provide more reliable p-values.
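
Beyond summary(), the components of a merMod object can be extracted directly. The following sketch uses the gmm model fit above; all of these accessor functions are provided by lme4.

    # Fixed effect estimates (beta) and the random effects (b)
    fixef(gmm)
    ranef(gmm)
    
    # Variance (and covariance) estimates of the random effects
    VarCorr(gmm)
    
    # The model matrices from the mixed model formulation: X (fixed
    # portion) and Z (random portion)
    dim(getME(gmm, "X"))
    dim(getME(gmm, "Z"))
    
    # Predictions conditional on the fitted random effects, and
    # unconditional (population-level) predictions with re.form = NA
    head(predict(gmm, type = "response"))
    head(predict(gmm, re.form = NA, type = "response"))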

Exercises

Use the sleepstudy data set for the following exercises. This data is loaded using

```
data(sleepstudy)
```
  1. Create a model for Reaction time regressing on Days and accounting for the random effect of Subject.

  2. The median reaction time is 288.7. Create a variable aboveAve using the code provided below. Then create a model with this new dichotomous variable as the response.

    ```
    aboveAve <- ifelse(sleepstudy$Reaction > 288.7, 1, 0)
    ```

Solutions

Previous: Introduction

Next: Testing Significance of Effects

Last Revised: 4/1/2016