Ling 204: session 05

Author
Affiliations

UC Santa Barbara

JLU Giessen

Published

04 Feb 2026 11:34:56

1 Session 05: Mixed-effects models 1

In this session, we will look at one or two data sets from a language acquisition paper published in Cognition, namely Pine et al. (2013). The data sets are not really ideal examples for the application of mixed-effects modeling, but they are extremely small (so they are easy to look at comprehensively and fast to run) and they have a few characteristics that make them interesting didactically. Still, bear in mind that they are not the best examples and that the analysis we will proceed with here is quite a bit of overkill, given the size and complexity of this data set.

rm(list=ls(all.names=TRUE))
set.seed(sum(utf8ToInt("Gummibaerchen")))
pkgs2load <- c("dplyr", "car", "effects", "emmeans", "lme4", "magrittr", "multcomp", "MuMIn")
suppressMessages(sapply(pkgs2load, library, character.only=TRUE, logical.return=TRUE, quietly=TRUE))
rm(pkgs2load)
   dplyr      car  effects  emmeans     lme4 magrittr multcomp    MuMIn
    TRUE     TRUE     TRUE     TRUE     TRUE     TRUE     TRUE     TRUE 

The first analysis will be based on the data in their Table 1, which we load from _input/pine-etal-table1.csv, and you can find information about its contents in _input/pine-etal-table1.r:

summary(d <- read.delim(
   file="_input/pine-etal-table1.csv",
   stringsAsFactors=TRUE))
      CASE          OVERLAP            NAME       PHASE     USED
 Min.   : 1.00   Min.   :0.2600   Anne   : 6   Phase1:24   no :36
 1st Qu.:18.75   1st Qu.:0.3475   Aran   : 6   Phase2:24   yes:36
 Median :36.50   Median :0.4850   Becky  : 6   Phase3:24
 Mean   :36.50   Mean   :0.5158   Carl   : 6
 3rd Qu.:54.25   3rd Qu.:0.6625   Dominic: 6
 Max.   :72.00   Max.   :1.0000   Gail   : 6
                                  (Other):36                       

1.1 Exploration & preparation

As always, it is indispensable to do a decent amount of exploratory plotting before any statistical analysis is begun. If you do mixed-effects modeling, you really must look at

  • as before, every variable in isolation (the response, predictors, controls, random effects, everything);
  • as before, every rhs variable (fixed and random effects) with the response;
  • as before, every combination of rhs variables that will be considered in the model on their own and with the response;
  • but now also combinations of fixed-effects predictors and random effects.

1.1.1 The response

Since we want to stay as close to their data as possible, we follow them in applying the arcsine transformation to the response variable:

d$OVERLAP.asin <- asin(d$OVERLAP)
summary(d <- d[,c(1,3,6,4,5,2)])
      CASE            NAME     OVERLAP.asin       PHASE     USED
 Min.   : 1.00   Anne   : 6   Min.   :0.2630   Phase1:24   no :36
 1st Qu.:18.75   Aran   : 6   1st Qu.:0.3549   Phase2:24   yes:36
 Median :36.50   Becky  : 6   Median :0.5071   Phase3:24
 Mean   :36.50   Carl   : 6   Mean   :0.5610
 3rd Qu.:54.25   Dominic: 6   3rd Qu.:0.7242
 Max.   :72.00   Gail   : 6   Max.   :1.5708
                 (Other):36
    OVERLAP
 Min.   :0.2600
 1st Qu.:0.3475
 Median :0.4850
 Mean   :0.5158
 3rd Qu.:0.6625
 Max.   :1.0000
                 

What do the response and its transformed version look like?

par(mfrow=c(1,3))
hist(d$OVERLAP     , breaks="FD")
hist(d$OVERLAP.asin, breaks="FD")
plot(d$OVERLAP, d$OVERLAP.asin); grid(); abline(0, 1)
par(mfrow=c(1,1))
Figure 1: Descriptive exploration of the response

Not really clear to me that this transformation does so much good here …
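If you want to put a rough number on that impression, one quick optional check is to compare the skewness of the raw and the transformed response; this is only a sketch (the little helper skew is defined ad hoc here, so no extra package is needed) and not part of the original analysis:

skew <- function (x) mean((x-mean(x))^3) / sd(x)^3 # simple moment-based skewness
c(raw=skew(d$OVERLAP), arcsine=skew(d$OVERLAP.asin))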

1.1.2 The fixed-effects structure

Here’s the first predictor, PHASE:

table(d$PHASE) # frequencies of levels

Phase1 Phase2 Phase3
    24     24     24 
tapply(d$OVERLAP.asin, d$PHASE, mean) # means of response per level
   Phase1    Phase2    Phase3
0.6189580 0.5492471 0.5148980 
boxplot(d$OVERLAP.asin ~ d$PHASE, notch=TRUE, varwidth=TRUE); grid()
Figure 2: Exploration of the response as a function of PHASE

Doesn’t look like there will be much of an effect, but maybe a bit.

Here’s the second predictor, USED:

table(d$USED) # frequencies of levels

 no yes
 36  36 
tapply(d$OVERLAP.asin, d$USED, mean) # means of response per level
       no       yes
0.3572094 0.7648593 
boxplot(d$OVERLAP.asin ~ d$USED, notch=TRUE, varwidth=TRUE); grid()
Figure 3: Exploration of the response as a function of USED

At least from what we see now, this should be highly significant.

Now the combination of predictors, which suggests there might be an interaction between PHASE and USED (see also the optional plot sketch after the table):

table(d$PHASE, d$USED) # frequencies of combinations of levels

         no yes
  Phase1 12  12
  Phase2 12  12
  Phase3 12  12
tapply(d$OVERLAP.asin, list(d$PHASE, d$USED), mean) # means of ...
              no       yes
Phase1 0.3866357 0.8512803
Phase2 0.3500527 0.7484416
Phase3 0.3349400 0.6948561
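If you prefer a quick visual version of this table of means, base R’s interaction.plot can be used; an optional sketch, not part of the original analysis:

interaction.plot(x.factor=d$PHASE, trace.factor=d$USED, response=d$OVERLAP.asin,
   type="b", trace.label="USED"); grid()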

1.1.3 The random-effects structure

Every kid contributes the same (small) number of data points:

table(d$NAME)

   Anne    Aran   Becky    Carl Dominic    Gail    Joel    John     Liz  Nicole
      6       6       6       6       6       6       6       6       6       6
   Ruth  Warren
      6       6 

Every kid is recorded in each phase (& even equally often), which means we need varying slopes of PHASE|NAME (and maybe the interaction w/ USED):

table(d$NAME, d$PHASE)

          Phase1 Phase2 Phase3
  Anne         2      2      2
  Aran         2      2      2
  Becky        2      2      2
  Carl         2      2      2
  Dominic      2      2      2
  Gail         2      2      2
  Joel         2      2      2
  John         2      2      2
  Liz          2      2      2
  Nicole       2      2      2
  Ruth         2      2      2
  Warren       2      2      2

Every kid is recorded in each condition of USED (& even equally often), which means we need varying slopes of USED|NAME (and maybe the interaction w/ PHASE):

table(d$NAME, d$USED)

          no yes
  Anne     3   3
  Aran     3   3
  Becky    3   3
  Carl     3   3
  Dominic  3   3
  Gail     3   3
  Joel     3   3
  John     3   3
  Liz      3   3
  Nicole   3   3
  Ruth     3   3
  Warren   3   3

Every kid is recorded once in each combo of PHASE & USED (!):

ftable(d$NAME, d$USED, d$PHASE)
             Phase1 Phase2 Phase3

Anne    no        1      1      1
        yes       1      1      1
Aran    no        1      1      1
        yes       1      1      1
Becky   no        1      1      1
        yes       1      1      1
Carl    no        1      1      1
        yes       1      1      1
Dominic no        1      1      1
        yes       1      1      1
Gail    no        1      1      1
        yes       1      1      1
Joel    no        1      1      1
        yes       1      1      1
John    no        1      1      1
        yes       1      1      1
Liz     no        1      1      1
        yes       1      1      1
Nicole  no        1      1      1
        yes       1      1      1
Ruth    no        1      1      1
        yes       1      1      1
Warren  no        1      1      1
        yes       1      1      1

1.2 Wrong & traditional analyses

One might be tempted to fit this linear model, but …

summary(lm(OVERLAP ~ NAME*PHASE*USED, data=d))

Call:
lm(formula = OVERLAP ~ NAME * PHASE * USED, data = d)

Residuals:
ALL 72 residuals are 0: no residual degrees of freedom!

Coefficients:
                                  Estimate Std. Error t value Pr(>|t|)
(Intercept)                      4.400e-01        NaN     NaN      NaN
NAMEAran                         2.064e-16        NaN     NaN      NaN
NAMEBecky                       -7.000e-02        NaN     NaN      NaN
NAMECarl                        -1.000e-01        NaN     NaN      NaN
NAMEDominic                     -8.000e-02        NaN     NaN      NaN
NAMEGail                        -7.000e-02        NaN     NaN      NaN
NAMEJoel                        -1.200e-01        NaN     NaN      NaN
NAMEJohn                        -9.000e-02        NaN     NaN      NaN
NAMELiz                         -1.100e-01        NaN     NaN      NaN
NAMENicole                      -9.000e-02        NaN     NaN      NaN
NAMERuth                        -3.123e-17        NaN     NaN      NaN
NAMEWarren                      -3.000e-02        NaN     NaN      NaN
PHASEPhase2                     -6.000e-02        NaN     NaN      NaN
PHASEPhase3                     -4.000e-02        NaN     NaN      NaN
USEDyes                          2.900e-01        NaN     NaN      NaN
NAMEAran:PHASEPhase2             3.000e-02        NaN     NaN      NaN
NAMEBecky:PHASEPhase2           -2.000e-02        NaN     NaN      NaN
NAMECarl:PHASEPhase2             6.000e-02        NaN     NaN      NaN
NAMEDominic:PHASEPhase2          4.000e-02        NaN     NaN      NaN
NAMEGail:PHASEPhase2             3.000e-02        NaN     NaN      NaN
NAMEJoel:PHASEPhase2             1.000e-02        NaN     NaN      NaN
NAMEJohn:PHASEPhase2             9.000e-02        NaN     NaN      NaN
NAMELiz:PHASEPhase2             -1.000e-02        NaN     NaN      NaN
NAMENicole:PHASEPhase2           3.000e-02        NaN     NaN      NaN
NAMERuth:PHASEPhase2             3.000e-02        NaN     NaN      NaN
NAMEWarren:PHASEPhase2           2.000e-02        NaN     NaN      NaN
NAMEAran:PHASEPhase3             1.000e-02        NaN     NaN      NaN
NAMEBecky:PHASEPhase3           -2.000e-02        NaN     NaN      NaN
NAMECarl:PHASEPhase3            -2.000e-02        NaN     NaN      NaN
NAMEDominic:PHASEPhase3         -1.548e-16        NaN     NaN      NaN
NAMEGail:PHASEPhase3            -1.000e-02        NaN     NaN      NaN
NAMEJoel:PHASEPhase3            -1.000e-02        NaN     NaN      NaN
NAMEJohn:PHASEPhase3             4.000e-02        NaN     NaN      NaN
NAMELiz:PHASEPhase3             -2.000e-02        NaN     NaN      NaN
NAMENicole:PHASEPhase3          -7.612e-17        NaN     NaN      NaN
NAMERuth:PHASEPhase3            -3.000e-02        NaN     NaN      NaN
NAMEWarren:PHASEPhase3          -4.000e-02        NaN     NaN      NaN
NAMEAran:USEDyes                 2.000e-02        NaN     NaN      NaN
NAMEBecky:USEDyes               -2.000e-02        NaN     NaN      NaN
NAMECarl:USEDyes                 1.200e-01        NaN     NaN      NaN
NAMEDominic:USEDyes              1.400e-01        NaN     NaN      NaN
NAMEGail:USEDyes                -5.000e-02        NaN     NaN      NaN
NAMEJoel:USEDyes                 3.000e-02        NaN     NaN      NaN
NAMEJohn:USEDyes                 1.100e-01        NaN     NaN      NaN
NAMELiz:USEDyes                  8.000e-02        NaN     NaN      NaN
NAMENicole:USEDyes               6.000e-02        NaN     NaN      NaN
NAMERuth:USEDyes                 2.700e-01        NaN     NaN      NaN
NAMEWarren:USEDyes              -5.550e-16        NaN     NaN      NaN
PHASEPhase2:USEDyes             -5.621e-16        NaN     NaN      NaN
PHASEPhase3:USEDyes             -4.000e-02        NaN     NaN      NaN
NAMEAran:PHASEPhase2:USEDyes     1.000e-02        NaN     NaN      NaN
NAMEBecky:PHASEPhase2:USEDyes    8.000e-02        NaN     NaN      NaN
NAMECarl:PHASEPhase2:USEDyes    -1.000e-01        NaN     NaN      NaN
NAMEDominic:PHASEPhase2:USEDyes -3.000e-02        NaN     NaN      NaN
NAMEGail:PHASEPhase2:USEDyes     4.000e-02        NaN     NaN      NaN
NAMEJoel:PHASEPhase2:USEDyes     3.000e-02        NaN     NaN      NaN
NAMEJohn:PHASEPhase2:USEDyes    -1.100e-01        NaN     NaN      NaN
NAMELiz:PHASEPhase2:USEDyes      6.223e-16        NaN     NaN      NaN
NAMENicole:PHASEPhase2:USEDyes   3.000e-02        NaN     NaN      NaN
NAMERuth:PHASEPhase2:USEDyes    -1.600e-01        NaN     NaN      NaN
NAMEWarren:PHASEPhase2:USEDyes   5.895e-16        NaN     NaN      NaN
NAMEAran:PHASEPhase3:USEDyes     3.000e-02        NaN     NaN      NaN
NAMEBecky:PHASEPhase3:USEDyes    8.000e-02        NaN     NaN      NaN
NAMECarl:PHASEPhase3:USEDyes     1.000e-02        NaN     NaN      NaN
NAMEDominic:PHASEPhase3:USEDyes -7.000e-02        NaN     NaN      NaN
NAMEGail:PHASEPhase3:USEDyes     1.200e-01        NaN     NaN      NaN
NAMEJoel:PHASEPhase3:USEDyes    -2.000e-02        NaN     NaN      NaN
NAMEJohn:PHASEPhase3:USEDyes    -7.000e-02        NaN     NaN      NaN
NAMELiz:PHASEPhase3:USEDyes     -2.000e-02        NaN     NaN      NaN
NAMENicole:PHASEPhase3:USEDyes   1.000e-02        NaN     NaN      NaN
NAMERuth:PHASEPhase3:USEDyes    -2.000e-01        NaN     NaN      NaN
NAMEWarren:PHASEPhase3:USEDyes   1.000e-01        NaN     NaN      NaN

Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared:      1, Adjusted R-squared:    NaN
F-statistic:   NaN on 71 and 0 DF,  p-value: NA

that would be wrong. Why? Because

  • NAME is not a fixed effect but a random effect;
  • this model doesn’t account for the repeated measurements;
  • this model has no residual degrees of freedom because the highest-level interaction leaves only 1 case per combination of NAME, PHASE, and USED (see the quick check below).
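The last point can be checked directly (a quick optional sketch, just counting coefficients and rows):

nrow(d)                                             # 72 observations (12 kids x 3 phases x 2 levels of USED)
length(coef(lm(OVERLAP ~ NAME*PHASE*USED, data=d))) # 72 coefficients, hence 0 residual df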

Here’s what Pine et al. did: a repeated-measures ANOVA with the 2 within-subject predictors PHASE and USED nested within NAME:

summary(aov(OVERLAP.asin ~ PHASE*USED + Error(NAME/(PHASE*USED)), data=d))

Error: NAME
          Df Sum Sq Mean Sq F value Pr(>F)
Residuals 11 0.3788 0.03444

Error: NAME:PHASE
          Df Sum Sq Mean Sq F value  Pr(>F)
PHASE      2 0.1349 0.06747   9.689 0.00096 ***
Residuals 22 0.1532 0.00696
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Error: NAME:USED
          Df Sum Sq Mean Sq F value   Pr(>F)
USED       1 2.9912   2.991   229.8 1.02e-08 ***
Residuals 11 0.1432   0.013
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Error: NAME:PHASE:USED
           Df  Sum Sq  Mean Sq F value Pr(>F)
PHASE:USED  2 0.03368 0.016838   2.433  0.111
Residuals  22 0.15228 0.006922               
ez::ezANOVA(data=d, dv=.(OVERLAP.asin), wid=.(NAME), within=.(PHASE, USED))
$ANOVA
      Effect DFn DFd          F            p p<.05        ges
2      PHASE   2  22   9.689467 9.595714e-04     * 0.14021390
3       USED   1  11 229.811287 1.019506e-08     * 0.78331047
4 PHASE:USED   2  22   2.432580 1.110620e-01       0.03910617

$`Mauchly's Test for Sphericity`
      Effect         W            p p<.05
2      PHASE 0.1356494 4.592929e-05     *
4 PHASE:USED 0.2703970 1.445471e-03     *

$`Sphericity Corrections`
      Effect       GGe       p[GG] p[GG]<.05       HFe       p[HF] p[HF]<.05
2      PHASE 0.5363798 0.008315462         * 0.5476404 0.007884936         *
4 PHASE:USED 0.5781674 0.141404686           0.6032314 0.139532978          

In their results, there is

  • a main effect of noun type (= USED);
  • a main effect of developmental stage (= PHASE);
  • an interaction of USED:PHASE.

BTW: note the violation of sphericity.

1.3 Modeling & numerical interpretation

1.3.1 Exploring the random-effects structure

The first model one might fit is nonsense – why?

summary(m_01a <- lmer(
   OVERLAP.asin ~ 1 +        # intercept
      PHASE*USED +           # fixed effects
      (1 + PHASE*USED|NAME), # random effects
   data=d), correlation=FALSE)

That model is nonsense because every child contributes exactly one data point per combination of PHASE and USED (see the ftable above), so by-child random slopes for the PHASE:USED interaction cannot be distinguished from the residual error. We therefore eliminate the interaction in the ranef structure:

summary(m_01b <- lmer(
   OVERLAP.asin ~ 1 +        # intercept
      PHASE*USED +           # fixed effects
      (1 + PHASE+USED|NAME), # random effects
   data=d), correlation=FALSE)
Linear mixed model fit by REML ['lmerMod']
Formula: OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE + USED | NAME)
   Data: d

REML criterion at convergence: -131.7

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.7305 -0.4470 -0.0031  0.4964  3.9377

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 NAME     (Intercept) 0.007472 0.08644
          PHASEPhase2 0.005692 0.07544  -0.95
          PHASEPhase3 0.010240 0.10119  -0.93  1.00
          USEDyes     0.007245 0.08512   0.95 -1.00 -1.00
 Residual             0.003765 0.06136
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error t value
(Intercept)          0.38664    0.03060  12.635
PHASEPhase2         -0.03658    0.03319  -1.102
PHASEPhase3         -0.05170    0.03848  -1.343
USEDyes              0.46464    0.03509  13.242
PHASEPhase2:USEDyes -0.06626    0.03542  -1.870
PHASEPhase3:USEDyes -0.10473    0.03542  -2.956
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')

Of course we get a singularity warning … We explore the ranef structure to see what we might end up with:

rePCA(m_01b)$NAME$sdev %>% round(4)
[1] 2.8225 0.4173 0.0000 0.0000
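To make these numbers a bit easier to interpret, one can convert them into proportions of the by-NAME variance; a small optional computation based on the sdev vector just shown (the object name sdevs is just for illustration):

sdevs <- rePCA(m_01b)$NAME$sdev
round(sdevs^2 / sum(sdevs^2), 4) # proportion of ranef variance per PCA dimension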

The rePCA output suggests we will end up with 1, at most 2, sources of ranef variation. We have two kinds of random slopes we might eliminate, and we cannot tell from the ranef output which one comes with the smaller variance/sd. Thus, we try eliminating both (separately). First, we drop the slopes for PHASE:

summary(m_02a <- update(
   m_01b, .~.
   - (1 + PHASE+USED|NAME)   # random effects dropped
   + (1 +       USED|NAME)), # random effects inserted: no slopes for PHASE
   correlation=FALSE)
Linear mixed model fit by REML ['lmerMod']
Formula: OVERLAP.asin ~ PHASE + USED + (1 + USED | NAME) + PHASE:USED
   Data: d

REML criterion at convergence: -110.8

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.5878 -0.3644 -0.0408  0.3261  5.6294

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 NAME     (Intercept) 0.000975 0.03123
          USEDyes     0.005759 0.07589  1.00
 Residual             0.006313 0.07945
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error t value
(Intercept)          0.38664    0.02464  15.689
PHASEPhase2         -0.03658    0.03244  -1.128
PHASEPhase3         -0.05170    0.03244  -1.594
USEDyes              0.46464    0.03914  11.871
PHASEPhase2:USEDyes -0.06626    0.04587  -1.444
PHASEPhase3:USEDyes -0.10473    0.04587  -2.283
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')

We still get a singularity warning. Second, we drop the slopes for USED:

summary(m_02b <- update(
   m_01b, .~.
   - (1 + PHASE+USED|NAME)   # random effects dropped
   + (1 + PHASE     |NAME)), # random effects inserted: no slopes for USED
   correlation=FALSE)
Linear mixed model fit by REML ['lmerMod']
Formula: OVERLAP.asin ~ PHASE + USED + (1 + PHASE | NAME) + PHASE:USED
   Data: d

REML criterion at convergence: -110.8

Scaled residuals:
    Min      1Q  Median      3Q     Max
-3.1168 -0.3945 -0.0559  0.4102  5.0655

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 NAME     (Intercept) 0.014775 0.12155
          PHASEPhase2 0.004687 0.06846  -1.00
          PHASEPhase3 0.007934 0.08907  -1.00  1.00
 Residual             0.006321 0.07951
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error t value
(Intercept)          0.38664    0.04193   9.221
PHASEPhase2         -0.03658    0.03800  -0.963
PHASEPhase3         -0.05170    0.04141  -1.248
USEDyes              0.46464    0.03246  14.315
PHASEPhase2:USEDyes -0.06626    0.04590  -1.443
PHASEPhase3:USEDyes -0.10473    0.04590  -2.282
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')

We still get a singularity warning … But which of the models do we now continue with?

anova(m_01b, m_02a, refit=FALSE) # note the refit=FALSE, we stick with REML
Data: d
Models:
m_02a: OVERLAP.asin ~ PHASE + USED + (1 + USED | NAME) + PHASE:USED
m_01b: OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE + USED | NAME)
      npar     AIC     BIC logLik -2*log(L)  Chisq Df Pr(>Chisq)
m_02a   10 -90.819 -68.052 55.409   -110.82
m_01b   17 -97.658 -58.955 65.829   -131.66 20.839  7   0.004016 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(m_01b, m_02b, refit=FALSE) # note the refit=FALSE, we stick with REML
Data: d
Models:
m_02b: OVERLAP.asin ~ PHASE + USED + (1 + PHASE | NAME) + PHASE:USED
m_01b: OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE + USED | NAME)
      npar     AIC     BIC logLik -2*log(L)  Chisq Df Pr(>Chisq)
m_02b   13 -84.756 -55.160 55.378   -110.76
m_01b   17 -97.658 -58.955 65.829   -131.66 20.902  4  0.0003312 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Great, welcome to the great new world of mixed-effects modeling:

  1. the model summaries warn you that the fits are singular, i.e. that the ranef structure is more complex than the data can support;
  2. LR-tests say you cannot simplify the models’ ranef structure because the slopes are all significant.

So what do we do? In this didactic context, we do both.

1.3.1.1 Route 1: further simplification

This route prioritizes issue 1, getting models without convergence or other warnings. We begin with m_02a because it had the higher p-value. Since m_02a still had a warning, we try to simplify it again:

summary(m_03_r1 <- update(
   m_02a, .~. 
   - (1 +       USED|NAME)   # random effects dropped
   + (1             |NAME)), # random effects inserted: drop slopes for USED
   correlation=FALSE)
Linear mixed model fit by REML ['lmerMod']
Formula: OVERLAP.asin ~ PHASE + USED + (1 | NAME) + PHASE:USED
   Data: d

REML criterion at convergence: -99.3

Scaled residuals:
    Min      1Q  Median      3Q     Max
-1.7457 -0.3317 -0.0505  0.3025  6.3227

Random effects:
 Groups   Name        Variance Std.Dev.
 NAME     (Intercept) 0.004380 0.06618
 Residual             0.008157 0.09032
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error t value
(Intercept)          0.38664    0.03232  11.962
PHASEPhase2         -0.03658    0.03687  -0.992
PHASEPhase3         -0.05170    0.03687  -1.402
USEDyes              0.46464    0.03687  12.602
PHASEPhase2:USEDyes -0.06626    0.05215  -1.271
PHASEPhase3:USEDyes -0.10473    0.05215  -2.008
anova(m_02a, m_03_r1, refit=FALSE) %>% data.frame # abbreviating output
        npar       AIC       BIC   logLik X.2.log.L.    Chisq Df  Pr..Chisq.
m_03_r1    8 -83.33156 -65.11823 49.66578  -99.33156       NA NA          NA
m_02a     10 -90.81871 -68.05204 55.40935 -110.81871 11.48715  2 0.003203296

The LR-test says we need subject-specific slopes for USED but we drop them anyway because that gets us m_03_r1, which has no convergence warnings anymore. Thus, this first route makes us begin the exploration of the fixef structure with (the ML version of) m_03_r1.

1.3.1.2 Route 2: no further simplification

This route prioritizes issue 2, a model selection strategy that doesn’t eliminate things the LR-test says are significant. Thus, this second route makes us begin the exploration of the fixef structure with (the ML version of) m_01b.

1.3.2 Exploring the fixed-effects structure

1.3.2.1 Route 1 based on m_03_r1

First, we switch to an ML version of m_03_r1:

summary(m_03_r1_ml <- update( # update
   m_03_r1, .~.,              # this to a model w/
   REML=FALSE),               # ML estimation
   correlation=FALSE)
Linear mixed model fit by maximum likelihood  ['lmerMod']
Formula: OVERLAP.asin ~ PHASE + USED + (1 | NAME) + PHASE:USED
   Data: d

      AIC       BIC    logLik -2*log(L)  df.resid
   -114.9     -96.7      65.4    -130.9        64

Scaled residuals:
    Min      1Q  Median      3Q     Max
-1.8233 -0.3465 -0.0527  0.3159  6.6038

Random effects:
 Groups   Name        Variance Std.Dev.
 NAME     (Intercept) 0.004015 0.06336
 Residual             0.007478 0.08647
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error t value
(Intercept)          0.38664    0.03095  12.493
PHASEPhase2         -0.03658    0.03530  -1.036
PHASEPhase3         -0.05170    0.03530  -1.464
USEDyes              0.46464    0.03530  13.162
PHASEPhase2:USEDyes -0.06626    0.04993  -1.327
PHASEPhase3:USEDyes -0.10473    0.04993  -2.098

Then, we determine what, if anything, we can drop:

drop1(m_03_r1_ml, test="Chisq") %>% data.frame
           npar       AIC      LRT   Pr.Chi.
<none>       NA -114.8914       NA        NA
PHASE:USED    2 -114.5487 4.342619 0.1140282

The interaction of USED:PHASE is ns so we drop it:

summary(m_04_r1_ml <- update( # update
   m_03_r1_ml, .~.            # this to a model w/
   - USED:PHASE),             # fixef interaction dropped
   correlation=FALSE)
Linear mixed model fit by maximum likelihood  ['lmerMod']
Formula: OVERLAP.asin ~ PHASE + USED + (1 | NAME)
   Data: d

      AIC       BIC    logLik -2*log(L)  df.resid
   -114.5    -100.9      63.3    -126.5        66

Scaled residuals:
    Min      1Q  Median      3Q     Max
-1.4504 -0.3593 -0.1017  0.2668  6.7255

Random effects:
 Groups   Name        Variance Std.Dev.
 NAME     (Intercept) 0.003922 0.06262
 Residual             0.008039 0.08966
Number of obs: 72, groups:  NAME, 12

Fixed effects:
            Estimate Std. Error t value
(Intercept)  0.41513    0.02781  14.927
PHASEPhase2 -0.06971    0.02588  -2.693
PHASEPhase3 -0.10406    0.02588  -4.020
USEDyes      0.40765    0.02113  19.290
anova(m_03_r1_ml, m_04_r1_ml) %>% data.frame
           npar       AIC        BIC   logLik X.2.log.L.    Chisq Df Pr..Chisq.
m_04_r1_ml    6 -114.5487 -100.88874 63.27437  -126.5487       NA NA         NA
m_03_r1_ml    8 -114.8914  -96.67803 65.44568  -130.8914 4.342619  2  0.1140282

Can/must we drop even more?

drop1(m_04_r1_ml, test="Chisq") %>% data.frame
       npar         AIC       LRT      Pr.Chi.
<none>   NA -114.548741        NA           NA
PHASE     2 -103.747665  14.80108 6.109240e-04
USED      1    3.512494 120.06123 6.133776e-28

Nope, that’s it, we’re done; our final model of route 1 is the REML version of m_04_r1_ml:

summary(m_final_r1 <- update( # update
   m_04_r1_ml, .~.,           # this to a model w/
   REML=TRUE),                # REML estimation
   correlation=FALSE)
Linear mixed model fit by REML ['lmerMod']
Formula: OVERLAP.asin ~ PHASE + USED + (1 | NAME)
   Data: d

REML criterion at convergence: -103.7

Scaled residuals:
    Min      1Q  Median      3Q     Max
-1.4089 -0.3489 -0.0996  0.2593  6.5363

Random effects:
 Groups   Name        Variance Std.Dev.
 NAME     (Intercept) 0.004329 0.06580
 Residual             0.008462 0.09199
Number of obs: 72, groups:  NAME, 12

Fixed effects:
            Estimate Std. Error t value
(Intercept)  0.41513    0.02883  14.402
PHASEPhase2 -0.06971    0.02655  -2.625
PHASEPhase3 -0.10406    0.02655  -3.919
USEDyes      0.40765    0.02168  18.801

1.3.2.2 Route 2 based on m_01b

First, we switch to an ML version of m_01b:

summary(m_03_r2_ml <- update( # update
   m_01b, .~.,                # this to a model w/
   REML=FALSE),               # ML estimation
   correlation=FALSE)
Linear mixed model fit by maximum likelihood  ['lmerMod']
Formula: OVERLAP.asin ~ PHASE + USED + (1 + PHASE + USED | NAME) + PHASE:USED
   Data: d

      AIC       BIC    logLik -2*log(L)  df.resid
   -132.2     -93.5      83.1    -166.2        55

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.8519 -0.4668 -0.0032  0.5185  4.1128

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 NAME     (Intercept) 0.006848 0.08275
          PHASEPhase2 0.005216 0.07222  -0.95
          PHASEPhase3 0.009385 0.09687  -0.93  1.00
          USEDyes     0.006639 0.08148   0.95 -1.00 -1.00
 Residual             0.003451 0.05875
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error t value
(Intercept)          0.38664    0.02930  13.197
PHASEPhase2         -0.03658    0.03178  -1.151
PHASEPhase3         -0.05170    0.03684  -1.403
USEDyes              0.46464    0.03359  13.832
PHASEPhase2:USEDyes -0.06626    0.03392  -1.953
PHASEPhase3:USEDyes -0.10473    0.03392  -3.088
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')

Then, we determine what, if anything, we can drop:

drop1(m_03_r2_ml, test="Chisq") %>% data.frame
           npar       AIC      LRT   Pr.Chi.
<none>       NA -132.1564       NA        NA
PHASE:USED    2 -127.2731 8.883315 0.0117764

Nope, that’s it, we’re done, which means our final model of route 2 is m_01b:

summary(m_final_r2 <- m_01b, correlation=FALSE); rm(m_01b)
Linear mixed model fit by REML ['lmerMod']
Formula: OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE + USED | NAME)
   Data: d

REML criterion at convergence: -131.7

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.7305 -0.4470 -0.0031  0.4964  3.9377

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 NAME     (Intercept) 0.007472 0.08644
          PHASEPhase2 0.005692 0.07544  -0.95
          PHASEPhase3 0.010240 0.10119  -0.93  1.00
          USEDyes     0.007245 0.08512   0.95 -1.00 -1.00
 Residual             0.003765 0.06136
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error t value
(Intercept)          0.38664    0.03060  12.635
PHASEPhase2         -0.03658    0.03319  -1.102
PHASEPhase3         -0.05170    0.03848  -1.343
USEDyes              0.46464    0.03509  13.242
PHASEPhase2:USEDyes -0.06626    0.03542  -1.870
PHASEPhase3:USEDyes -0.10473    0.03542  -2.956
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')

1.3.2.3 Comparing routes side-by-side

We now have two different models, with different ranef and fixef structures, fitted on the same data set. Let’s compare how much they differ – how?

First, we might look at the predictions of the two models:

plot(pch=substr(d$NAME, 1, 1),
   xlim=c(0, 1.6), x=predict(m_final_r1), # predict's default
   ylim=c(0, 1.6), y=predict(m_final_r2)) # uses random effects
grid(); abline(0, 1)
text(1.25, 0.25, round(cor(predict(m_final_r1), predict(m_final_r2)), 3))
Figure 4: Comparing the predictions of the two models

Second, we might compare what the predictors are doing in each:

summary(m_final_r1, correlation=FALSE)
Linear mixed model fit by REML ['lmerMod']
Formula: OVERLAP.asin ~ PHASE + USED + (1 | NAME)
   Data: d

REML criterion at convergence: -103.7

Scaled residuals:
    Min      1Q  Median      3Q     Max
-1.4089 -0.3489 -0.0996  0.2593  6.5363

Random effects:
 Groups   Name        Variance Std.Dev.
 NAME     (Intercept) 0.004329 0.06580
 Residual             0.008462 0.09199
Number of obs: 72, groups:  NAME, 12

Fixed effects:
            Estimate Std. Error t value
(Intercept)  0.41513    0.02883  14.402
PHASEPhase2 -0.06971    0.02655  -2.625
PHASEPhase3 -0.10406    0.02655  -3.919
USEDyes      0.40765    0.02168  18.801
   plot(allEffects(m_final_r1), ylim=c(0, 1.6), grid=TRUE)
summary(m_final_r2, correlation=FALSE)
Linear mixed model fit by REML ['lmerMod']
Formula: OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE + USED | NAME)
   Data: d

REML criterion at convergence: -131.7

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.7305 -0.4470 -0.0031  0.4964  3.9377

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 NAME     (Intercept) 0.007472 0.08644
          PHASEPhase2 0.005692 0.07544  -0.95
          PHASEPhase3 0.010240 0.10119  -0.93  1.00
          USEDyes     0.007245 0.08512   0.95 -1.00 -1.00
 Residual             0.003765 0.06136
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error t value
(Intercept)          0.38664    0.03060  12.635
PHASEPhase2         -0.03658    0.03319  -1.102
PHASEPhase3         -0.05170    0.03848  -1.343
USEDyes              0.46464    0.03509  13.242
PHASEPhase2:USEDyes -0.06626    0.03542  -1.870
PHASEPhase3:USEDyes -0.10473    0.03542  -2.956
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
   plot(allEffects(m_final_r2), ylim=c(0, 1.6), grid=TRUE)
Figure 5: Comparing the predictors in the two models
Figure 6: Comparing the predictors in the two models

These effects look quite similar: in both models,

  • there is a small ordinal-looking effect of PHASE;
  • there is a huge effect of USED, with USED: yes seemingly twice as high as USED: no.

And it seems that the main difference is one child, namely Ruth. For didactic reasons only, we will now go with m_final_r2, because I want to spend time on interactions etc.

1.3.3 Modeling & numerical interpretation 2

We first generate a summary output of the final model that provides p-values for the fixef coefficients. Note that I am not fully loading the package lmerTest with library – I am using lmerTest::lmer via what is called explicit namespacing so that, when I write lmer, R keeps using lme4::lmer, not lmerTest::lmer:

summary(lmerTest::lmer(formula(m_final_r2), data=d), correlation=FALSE)
Linear mixed model fit by REML. t-tests use Satterthwaite's method [
lmerModLmerTest]
Formula: formula(m_final_r2)
   Data: d

REML criterion at convergence: -131.7

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.7305 -0.4470 -0.0031  0.4964  3.9377

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 NAME     (Intercept) 0.007472 0.08644
          PHASEPhase2 0.005692 0.07544  -0.95
          PHASEPhase3 0.010240 0.10119  -0.93  1.00
          USEDyes     0.007245 0.08512   0.95 -1.00 -1.00
 Residual             0.003765 0.06136
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                    Estimate Std. Error       df t value Pr(>|t|)
(Intercept)          0.38664    0.03060 14.36682  12.635 3.56e-09 ***
PHASEPhase2         -0.03658    0.03319 21.87478  -1.102  0.28238
PHASEPhase3         -0.05170    0.03848 17.88884  -1.343  0.19593
USEDyes              0.46464    0.03509 23.83609  13.242 1.76e-12 ***
PHASEPhase2:USEDyes -0.06626    0.03542 43.99976  -1.870  0.06810 .
PHASEPhase3:USEDyes -0.10473    0.03542 43.99976  -2.956  0.00499 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
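If you want to convince yourself that this explicit namespacing really leaves plain lmer pointing at lme4’s function, a quick optional check is to look at the two functions’ namespaces:

environmentName(environment(lmer))           # "lme4": what plain lmer resolves to
environmentName(environment(lmerTest::lmer)) # "lmerTest": only used for this summary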

Also, we should maybe generate nd again. Note that we also add a column PREDS_backtr where we transform the arcsine-transformed values back to the original scale; this can sometimes be useful for follow-up comparisons.

nd <- expand.grid(
   PHASE=levels(d$PHASE), # all levels of PHASE
   USED=levels(d$USED))   # all levels of USED
nd <- cbind(nd, PREDS=predict( # note: now we're not using ranefs
   m_final_r2, re.form=NA, newdata=nd)) # for the predictions!
nd$PREDS_backtr <- sin(nd$PREDS)
tapply(nd$PREDS, list(nd$USED, nd$PHASE), c)
       Phase1    Phase2    Phase3
no  0.3866357 0.3500527 0.3349400
yes 0.8512803 0.7484416 0.6948561
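For completeness, here is the same cross-tabulation on the back-transformed (original proportion) scale, using the PREDS_backtr column we just created; a small optional addition:

tapply(nd$PREDS_backtr, list(nd$USED, nd$PHASE), c)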

And we can generate a first set of plots for this model (again, only one plot in this case):

plot(effect("PHASE:USED", m_final_r2),
   ylim=c(0, 1.6), grid=TRUE)
Figure 7: The interaction of PHASE and USED in the current final model

This should make you wonder …

Think back to fixed-effects modeling only: is the difference between PHASE: 2 and PHASE: 3 significant?

1.3.4 Exploring conflation 1: glht and emmeans

This is how we can compare PHASE: 2 and PHASE: 3 for each level of USED with multcomp::glht:

summary(glht(m_final_r2, matrix(c(0,1,-1,0,0, 0), nrow=1))) # when USED is no

     Simultaneous Tests for General Linear Hypotheses

Fit: lmer(formula = OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE +
    USED | NAME), data = d)

Linear Hypotheses:
       Estimate Std. Error z value Pr(>|z|)
1 == 0  0.01511    0.02617   0.578    0.564
(Adjusted p values reported -- single-step method)
summary(glht(m_final_r2, matrix(c(0,1,-1,0,1,-1), nrow=1))) # when USED is yes

     Simultaneous Tests for General Linear Hypotheses

Fit: lmer(formula = OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE +
    USED | NAME), data = d)

Linear Hypotheses:
       Estimate Std. Error z value Pr(>|z|)
1 == 0  0.05359    0.02617   2.048   0.0406 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

And this is how we can compare PHASE: 2 and PHASE: 3 for each level of USED with emmeans::emmeans; first, with unadjusted p-values, …

pairs(emmeans(
   m_final_r2,
   ~ PHASE | USED,
   lmer.df="satterthwaite"),
   adjust="none")
USED = no:
 contrast        estimate     SE   df t.ratio p.value
 Phase1 - Phase2   0.0366 0.0332 21.9   1.102  0.2824
 Phase1 - Phase3   0.0517 0.0385 17.9   1.343  0.1959
 Phase2 - Phase3   0.0151 0.0262 39.0   0.578  0.5669

USED = yes:
 contrast        estimate     SE   df t.ratio p.value
 Phase1 - Phase2   0.1028 0.0332 21.9   3.098  0.0053
 Phase1 - Phase3   0.1564 0.0385 17.9   4.065  0.0007
 Phase2 - Phase3   0.0536 0.0262 39.0   2.048  0.0473

Degrees-of-freedom method: satterthwaite 

… but we should probably use adjusted values here; as usual, I recommend Holm’s method:

pairs(emmeans(
   m_final_r2,
   ~ PHASE | USED,
   lmer.df="satterthwaite"),
   adjust="holm")
USED = no:
 contrast        estimate     SE   df t.ratio p.value
 Phase1 - Phase2   0.0366 0.0332 21.9   1.102  0.5878
 Phase1 - Phase3   0.0517 0.0385 17.9   1.343  0.5878
 Phase2 - Phase3   0.0151 0.0262 39.0   0.578  0.5878

USED = yes:
 contrast        estimate     SE   df t.ratio p.value
 Phase1 - Phase2   0.1028 0.0332 21.9   3.098  0.0105
 Phase1 - Phase3   0.1564 0.0385 17.9   4.065  0.0022
 Phase2 - Phase3   0.0536 0.0262 39.0   2.048  0.0473

Degrees-of-freedom method: satterthwaite
P value adjustment: holm method for 3 tests 
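As a small sanity check, you can reproduce the adjusted values of the USED = no panel yourself from the unadjusted p-values shown above (up to rounding of the displayed values):

p.adjust(c(0.2824, 0.1959, 0.5669), method="holm") # all three end up at approx. 0.588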

And here is the same comparison of PHASE: 2 and PHASE: 3 for each level of USED with multcomp::glht again, this time with the contrast vectors spelled out element by element (but remember that glht doesn’t use Satterthwaite’s df correction):

summary(glht(m_final_r2, matrix(c( # when USED is no
   0,  # intercept
   1,  # PHASEPhase2
  -1,  # PHASEPhase3
   0,  # USEDyes
   0,  # PHASEPhase2:USEDyes
   0), # PHASEPhase3:USEDyes
  nrow=1)))

     Simultaneous Tests for General Linear Hypotheses

Fit: lmer(formula = OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE +
    USED | NAME), data = d)

Linear Hypotheses:
       Estimate Std. Error z value Pr(>|z|)
1 == 0  0.01511    0.02617   0.578    0.564
(Adjusted p values reported -- single-step method)
summary(glht(m_final_r2, matrix(c( # when USED is yes
   0,  # intercept
   1,  # PHASEPhase2
  -1,  # PHASEPhase3
   0,  # USEDyes
   1,  # PHASEPhase2:USEDyes
  -1), # PHASEPhase3:USEDyes
  nrow=1)))

     Simultaneous Tests for General Linear Hypotheses

Fit: lmer(formula = OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE +
    USED | NAME), data = d)

Linear Hypotheses:
       Estimate Std. Error z value Pr(>|z|)
1 == 0  0.05359    0.02617   2.048   0.0406 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

Tricky: the difference between PHASE: 2 and PHASE: 3 is post-hoc significant when USED is yes, but not when USED is no.

Something you might want to explore later:

lmerTest::difflsmeans(lmerTest::lmer(formula(m_final_r2), data=d))

1.3.5 Exploring conflation 2: model comparison/selection

The other, more general, way of testing whether phases 2 and 3 can be conflated involves model comparison/selection. To that end, we again begin by creating a version of the variable of interest in which the levels we want to test for conflation are indeed conflated:

d$PHASE_confl <- d$PHASE # create a copy of PHASE called PHASE_confl
levels(d$PHASE_confl) <- c(         # set its levels
   "Phase1", "Phase2&3", "Phase2&3") # w/ conflation
table(d$PHASE, d$PHASE_confl)     # check conflation

         Phase1 Phase2&3
  Phase1     24        0
  Phase2      0       24
  Phase3      0       24
summary(d <- d[,c(1:4,7,5:6)])
      CASE            NAME     OVERLAP.asin       PHASE      PHASE_confl
 Min.   : 1.00   Anne   : 6   Min.   :0.2630   Phase1:24   Phase1  :24
 1st Qu.:18.75   Aran   : 6   1st Qu.:0.3549   Phase2:24   Phase2&3:48
 Median :36.50   Becky  : 6   Median :0.5071   Phase3:24
 Mean   :36.50   Carl   : 6   Mean   :0.5610
 3rd Qu.:54.25   Dominic: 6   3rd Qu.:0.7242
 Max.   :72.00   Gail   : 6   Max.   :1.5708
                 (Other):36
  USED       OVERLAP
 no :36   Min.   :0.2600
 yes:36   1st Qu.:0.3475
          Median :0.4850
          Mean   :0.5158
          3rd Qu.:0.6625
          Max.   :1.0000
                          

With this, we can now create a model that differs from our current final model m_final_r2 by using this new predictor instead. Note, we must also change the ranef structure for this, which means that the model differs from m_final_r2 in both the fixef and ranef parts.

summary(m_03_r2_ml <- lmer(
   OVERLAP.asin ~ 1 +              # intercept
      USED*PHASE_confl +           # fixef now w/ PHASE_confl
      (1 + USED+PHASE_confl|NAME), # ranef now w/ PHASE_confl
   REML=FALSE, data=d), correlation=FALSE)
Linear mixed model fit by maximum likelihood  ['lmerMod']
Formula: OVERLAP.asin ~ 1 + USED * PHASE_confl + (1 + USED + PHASE_confl |
    NAME)
   Data: d

      AIC       BIC    logLik -2*log(L)  df.resid
   -137.3    -112.2      79.6    -159.3        61

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.9291 -0.4469  0.0498  0.4995  3.9474

Random effects:
 Groups   Name                Variance Std.Dev. Corr
 NAME     (Intercept)         0.006754 0.08218
          USEDyes             0.006366 0.07979   0.97
          PHASE_conflPhase2&3 0.006868 0.08288  -0.96 -1.00
 Residual                     0.004046 0.06361
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                            Estimate Std. Error t value
(Intercept)                  0.38664    0.03000  12.888
USEDyes                      0.46464    0.03471  13.386
PHASE_conflPhase2&3         -0.04414    0.03284  -1.344
USEDyes:PHASE_conflPhase2&3 -0.08549    0.03181  -2.688

So what does the model comparison, which we do with refit=TRUE because we’re focusing on the fixef part, say?

anova(m_final_r2, m_03_r2_ml, refit=TRUE)
Data: d
Models:
m_03_r2_ml: OVERLAP.asin ~ 1 + USED * PHASE_confl + (1 + USED + PHASE_confl | NAME)
m_final_r2: OVERLAP.asin ~ 1 + PHASE * USED + (1 + PHASE + USED | NAME)
           npar     AIC      BIC logLik -2*log(L)  Chisq Df Pr(>Chisq)
m_03_r2_ml   11 -137.28 -112.234 79.639   -159.28
m_final_r2   17 -132.16  -93.453 83.078   -166.16 6.8791  6     0.3322

The LR-test is not significant, so it seems phases 2 and 3 can/should be conflated. But can/must the interaction now be deleted again?

drop1(m_03_r2_ml, test="Chisq")
Single term deletions

Model:
OVERLAP.asin ~ 1 + USED * PHASE_confl + (1 + USED + PHASE_confl |
    NAME)
                 npar     AIC    LRT  Pr(Chi)
<none>                -137.28
USED:PHASE_confl    1 -132.55 6.7308 0.009476 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Nope, the interaction stays, which means that our REAL final model (of the didactically-motivated route with the warnings and the fixef interaction) is the REML version of m_03_r2_ml:

m_final <- update(  # update
   m_03_r2_ml, .~., # this to a model w/
   REML=TRUE)       # REML estimation
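A quick optional check that the update did what we wanted:

isREML(m_final) # should now return TRUE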

1.3.6 Diagnostics

The number-of-data-points requirements are different for mixed-effects models than for fixed-effects-only linear models; I won’t discuss them here, but you can read SFLWR3: Section 6.3 for a bit on that.

1.3.6.1 Exploring residuals generally

How would we check the residuals in general?

hist(residuals(m_final), breaks="FD")
Figure 8: The residuals of the current final model

This looks pretty good in general, even though three values ‘stick out’.
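A normal quantile-quantile plot of the residuals is a quick optional complement to the histogram (not part of the original write-up):

qqnorm(residuals(m_final)); qqline(residuals(m_final)); grid()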

1.3.6.2 Exploring residuals per kid

How would we check the residuals per child? (I’m labeling the points with the original 3 levels of PHASE.) Now we see which kid sticks out:

plot(residuals(m_final), type="h", ylim=c(-0.3, 0.3),
     col=rainbow(length(levels(d$NAME)))[as.numeric(d$NAME)]); grid()
text(d$CASE,
     residuals(m_final),
     substr(as.character(d$PHASE), nchar(as.character(d$PHASE)), nchar(as.character(d$PHASE))),
     col=rainbow(length(levels(d$NAME)))[as.numeric(d$NAME)])
text(tapply(d$CASE, d$NAME, mean), 0.2, levels(d$NAME),
     col=rainbow(length(levels(d$NAME)))[seq(length(levels(d$NAME)))])
Figure 9: The residuals of the final model (per child)

Doing some more detailed exploration of whether Ruth’s data ‘do damage’ to the model might be useful …
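One simple optional way to begin that exploration is to refit the final model without Ruth’s data and compare the fixed-effect estimates; this is only a sketch (the object name m_final_noRuth is just for illustration), and note that the reduced fit may well come with its own convergence/singularity warnings:

m_final_noRuth <- update(m_final, data=droplevels(subset(d, NAME!="Ruth")))
round(cbind(all.kids=fixef(m_final), without.Ruth=fixef(m_final_noRuth)), 4)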

1.3.6.3 Exploring residuals against fitted

How would we generate the usual diagnostic plot that plot(lm(...)) would generate, residuals vs. fitted? With the exception of the by now familiar 3 points from Ruth, this looks pretty good:

plot(m_final, type=c("p", "smooth"), id=0.05)
Figure 10: Residuals against fitted for the final model

One can sometimes also do the following, but in this case this is not useful:

plot(m_final, resid(.) ~ fitted(.) | NAME, abline=c(h=0), lty=1, type=c("p", "smooth"))

1.3.6.4 Exploring residuals against predictors

Finally, let’s see whether there’s anything noteworthy when we plot residuals against the predictors:

stripchart(residuals(m_final) ~ d$PHASE_confl, method="jitter"); grid(); abline(v=0)
stripchart(residuals(m_final) ~ d$USED       , method="jitter"); grid(); abline(v=0)
Figure 11: Residuals against predictors for the final model
Figure 12: Residuals against predictors for the final model

1.3.6.5 Excursus: looking for influential observations

Something to check for later: You can either use the influence.ME package, …

influence.ME::pchange(influence.ME::influence(m_final, group="NAME")) # percentage change
influence.ME::sigtest(influence.ME::influence(m_final, group="NAME")) # does sign change?

… or you can use something like this (which I haven’t played around with much yet):

influence(m_final)

1.3.7 Modeling & numerical interpretation 3

Let’s generate all numerical results again in one place:

summary(lmerTest::lmer(formula(m_final), data=d), correlation=FALSE)
Linear mixed model fit by REML. t-tests use Satterthwaite's method [
lmerModLmerTest]
Formula: formula(m_final)
   Data: d

REML criterion at convergence: -136.2

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.8742 -0.4263  0.0502  0.4692  3.8266

Random effects:
 Groups   Name                Variance Std.Dev. Corr
 NAME     (Intercept)         0.007403 0.08604
          USEDyes             0.007006 0.08370   0.96
          PHASE_conflPhase2&3 0.007567 0.08699  -0.96 -1.00
 Residual                     0.004222 0.06498
Number of obs: 72, groups:  NAME, 12

Fixed effects:
                            Estimate Std. Error       df t value Pr(>|t|)
(Intercept)                  0.38664    0.03112 14.79425  12.422 3.16e-09 ***
USEDyes                      0.46464    0.03588 25.53028  12.949 1.01e-12 ***
PHASE_conflPhase2&3         -0.04414    0.03403 18.69062  -1.297   0.2104
USEDyes:PHASE_conflPhase2&3 -0.08549    0.03249 45.99964  -2.631   0.0115 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
drop1(m_final, test="Chisq") %>% data.frame
                 npar       AIC      LRT    Pr.Chi.
<none>             NA -137.2774       NA         NA
USED:PHASE_confl    1 -132.5466 6.730773 0.00947635

The overall significance test of m_final can be conducted via model comparison again: we compare m_final to a model that has the same ranef structure but no fixefs other than the overall intercept. That way, an LRT with anova will determine the p-value of all fixefs together:

m_null_re <- lmer(               # create a null model, now with ranefs:
   OVERLAP.asin ~ 1 +            # no fixefs but an overall intercept plus
      (1+USED+PHASE_confl|NAME), # the ranef structure of the final model
      data=d)
anova(m_final,      # LRT comparison of the final model
      m_null_re,    # with this null model
      test="Chisq") %>% data.frame # with ML! The diff is ***
          npar        AIC        BIC   logLik X.2.log.L.    Chisq Df
m_null_re    8  -92.89176  -74.67843 54.44588  -108.8918       NA NA
m_final     11 -137.27735 -112.23402 79.63868  -159.2774 50.38559  3
           Pr..Chisq.
m_null_re          NA
m_final   6.61268e-11
pchisq(50.3859, 3, lower.tail=FALSE)
[1] 6.611678e-11

You can also compute m_final’s relative likelihood:

LMERConvenienceFunctions::relLik( # how much more likely
   m_final,   # m_final is the right model than
   m_null_re) # m_null_re (i.e., insanely likely)
       AIC(x)        AIC(y)          diff        relLik
   -114.15055     -86.06589     -28.08467 1254606.15889 

That statistic is computed like this:

1/(exp((AIC(m_final)-AIC(m_null_re))/2))
[1] 1254606
# (AIC(m_final)-AIC(m_null_re)) %>% "/"(2) %>% exp %>% "^"(-1)

Here are three ways to compute confidence intervals. First, the fastest using car::Confint, which by default only provides CIs for the fixefs:

(m_final_ci.car <- Confint(m_final))
                               Estimate      2.5 %      97.5 %
(Intercept)                  0.38663568  0.3256326  0.44763879
USEDyes                      0.46464458  0.3943171  0.53497210
PHASE_conflPhase2&3         -0.04413936 -0.1108460  0.02256725
USEDyes:PHASE_conflPhase2&3 -0.08549204 -0.1491684 -0.02181571

Then, there is profile-based confint (lme4’s confint.merMod with method="profile"), which already takes quite a bit longer even for a ridiculously small model like this one, but also provides CIs for the ranefs:

(m_final_ci.prof <- confint(m_final, method="profile"))
                                  2.5 %      97.5 %
.sig01                       0.04662071  0.14358556
.sig02                       0.52899284  1.00000000
.sig03                      -1.00000000 -0.95989964
.sig04                       0.04351837  0.14040648
.sig05                      -1.00000000 -0.78013323
.sig06                       0.04580012  0.14643319
.sigma                       0.05320387  0.07848211
(Intercept)                  0.32693621  0.44633481
USEDyes                      0.39588558  0.53339757
PHASE_conflPhase2&3         -0.10940341  0.02254974
USEDyes:PHASE_conflPhase2&3 -0.14866057 -0.02232274

And the best option is bootstrap-based confint.merMod (method="boot"), but it can take quite a bit longer; you should always immediately save the results:

set.seed(sum(utf8ToInt("Räucherforelle"))) # set a replicable random number seed
(m_final_ci.boot <- confint(m_final,
   method="boot", nsim=200,
   parallel="snow", ncpus=10))
                                  2.5 %      97.5 %
.sig01                       0.04303766  0.13712142
.sig02                       0.40830615  1.00000000
.sig03                      -1.00000000 -0.75086927
.sig04                       0.03319866  0.13478571
.sig05                      -0.99999999 -0.60140364
.sig06                       0.03778551  0.13550258
.sigma                       0.04848854  0.07431800
(Intercept)                  0.32296989  0.45419935
USEDyes                      0.37979548  0.53984879
PHASE_conflPhase2&3         -0.12301453  0.01747758
USEDyes:PHASE_conflPhase2&3 -0.14823388 -0.01482163
save.image(file="_output/204_05_mfinal1&confints.RData")

Also, we want R2-values, which are all too often not provided (for reasons I don’t understand):

r.squaredGLMM(m_final)
           R2m       R2c
[1,] 0.7770789 0.9258445

Very nice: R2 marginal (fixed effects only) is already high, and R2 conditional (fixed plus random effects) doesn’t provide that much more variance explanation on top of it.

Here are the most straightforward ways to extract fixefs and ranefs (in a way similar to what coef did for lm/glm):

(fixdeffs <- fixef(m_final)) # same as coef(lm(...))
                (Intercept)                     USEDyes
                 0.38663568                  0.46464458
        PHASE_conflPhase2&3 USEDyes:PHASE_conflPhase2&3
                -0.04413936                 -0.08549204 
(randeffs <- ranef(m_final, condVar=TRUE, drop=TRUE)) # a ranef.mer object
$NAME
         (Intercept)      USEDyes PHASE_conflPhase2&3
Anne     0.004305329 -0.010205604         0.011554728
Aran     0.036573260  0.012466399        -0.011356669
Becky   -0.054854113 -0.049169649         0.050686024
Carl    -0.010089150 -0.003105923         0.002765024
Dominic  0.024277705  0.024772835        -0.025758262
Gail    -0.063074215 -0.064867835         0.067480957
Joel    -0.083359986 -0.064820085         0.066090920
John    -0.005391482 -0.010260059         0.010975623
Liz     -0.050961009 -0.031858427         0.031824430
Nicole  -0.022998766 -0.020733795         0.021381923
Ruth     0.235620736  0.235252461        -0.244276222
Warren  -0.010048308 -0.017470317         0.018631524

with conditional variances for "NAME" 
(randeffs <- ranef(m_final, condVar=TRUE, drop=TRUE)$NAME) # a data frame
         (Intercept)      USEDyes PHASE_conflPhase2&3
Anne     0.004305329 -0.010205604         0.011554728
Aran     0.036573260  0.012466399        -0.011356669
Becky   -0.054854113 -0.049169649         0.050686024
Carl    -0.010089150 -0.003105923         0.002765024
Dominic  0.024277705  0.024772835        -0.025758262
Gail    -0.063074215 -0.064867835         0.067480957
Joel    -0.083359986 -0.064820085         0.066090920
John    -0.005391482 -0.010260059         0.010975623
Liz     -0.050961009 -0.031858427         0.031824430
Nicole  -0.022998766 -0.020733795         0.021381923
Ruth     0.235620736  0.235252461        -0.244276222
Warren  -0.010048308 -0.017470317         0.018631524
VarCorr(m_final)
 Groups   Name                Std.Dev. Corr
 NAME     (Intercept)         0.086040
          USEDyes             0.083703  0.963
          PHASE_conflPhase2&3 0.086990 -0.958 -1.000
 Residual                     0.064977              

Note: often, the following long-format representation can be useful, too. Either you retrieve all values as one data frame …

(randeffs.split <- m_final %>% ranef(condVar=TRUE) %>% data.frame)
   grpvar                term     grp      condval     condsd
1    NAME         (Intercept)    Anne  0.004305329 0.02462173
2    NAME         (Intercept)    Aran  0.036573260 0.02462173
3    NAME         (Intercept)   Becky -0.054854113 0.02462173
4    NAME         (Intercept)    Carl -0.010089150 0.02462173
5    NAME         (Intercept) Dominic  0.024277705 0.02462173
6    NAME         (Intercept)    Gail -0.063074215 0.02462173
7    NAME         (Intercept)    Joel -0.083359986 0.02462173
8    NAME         (Intercept)    John -0.005391482 0.02462173
9    NAME         (Intercept)     Liz -0.050961009 0.02462173
10   NAME         (Intercept)  Nicole -0.022998766 0.02462173
11   NAME         (Intercept)    Ruth  0.235620736 0.02462173
12   NAME         (Intercept)  Warren -0.010048308 0.02462173
13   NAME             USEDyes    Anne -0.010205604 0.02722676
14   NAME             USEDyes    Aran  0.012466399 0.02722676
15   NAME             USEDyes   Becky -0.049169649 0.02722676
16   NAME             USEDyes    Carl -0.003105923 0.02722676
17   NAME             USEDyes Dominic  0.024772835 0.02722676
18   NAME             USEDyes    Gail -0.064867835 0.02722676
19   NAME             USEDyes    Joel -0.064820085 0.02722676
20   NAME             USEDyes    John -0.010260059 0.02722676
21   NAME             USEDyes     Liz -0.031858427 0.02722676
22   NAME             USEDyes  Nicole -0.020733795 0.02722676
23   NAME             USEDyes    Ruth  0.235252461 0.02722676
24   NAME             USEDyes  Warren -0.017470317 0.02722676
25   NAME PHASE_conflPhase2&3    Anne  0.011554728 0.02892198
26   NAME PHASE_conflPhase2&3    Aran -0.011356669 0.02892198
27   NAME PHASE_conflPhase2&3   Becky  0.050686024 0.02892198
28   NAME PHASE_conflPhase2&3    Carl  0.002765024 0.02892198
29   NAME PHASE_conflPhase2&3 Dominic -0.025758262 0.02892198
30   NAME PHASE_conflPhase2&3    Gail  0.067480957 0.02892198
31   NAME PHASE_conflPhase2&3    Joel  0.066090920 0.02892198
32   NAME PHASE_conflPhase2&3    John  0.010975623 0.02892198
33   NAME PHASE_conflPhase2&3     Liz  0.031824430 0.02892198
34   NAME PHASE_conflPhase2&3  Nicole  0.021381923 0.02892198
35   NAME PHASE_conflPhase2&3    Ruth -0.244276222 0.02892198
36   NAME PHASE_conflPhase2&3  Warren  0.018631524 0.02892198

… or as a list of data frames:

(randeffs.split <- split(randeffs.split, randeffs.split$term))
$`(Intercept)`
   grpvar        term     grp      condval     condsd
1    NAME (Intercept)    Anne  0.004305329 0.02462173
2    NAME (Intercept)    Aran  0.036573260 0.02462173
3    NAME (Intercept)   Becky -0.054854113 0.02462173
4    NAME (Intercept)    Carl -0.010089150 0.02462173
5    NAME (Intercept) Dominic  0.024277705 0.02462173
6    NAME (Intercept)    Gail -0.063074215 0.02462173
7    NAME (Intercept)    Joel -0.083359986 0.02462173
8    NAME (Intercept)    John -0.005391482 0.02462173
9    NAME (Intercept)     Liz -0.050961009 0.02462173
10   NAME (Intercept)  Nicole -0.022998766 0.02462173
11   NAME (Intercept)    Ruth  0.235620736 0.02462173
12   NAME (Intercept)  Warren -0.010048308 0.02462173

$USEDyes
   grpvar    term     grp      condval     condsd
13   NAME USEDyes    Anne -0.010205604 0.02722676
14   NAME USEDyes    Aran  0.012466399 0.02722676
15   NAME USEDyes   Becky -0.049169649 0.02722676
16   NAME USEDyes    Carl -0.003105923 0.02722676
17   NAME USEDyes Dominic  0.024772835 0.02722676
18   NAME USEDyes    Gail -0.064867835 0.02722676
19   NAME USEDyes    Joel -0.064820085 0.02722676
20   NAME USEDyes    John -0.010260059 0.02722676
21   NAME USEDyes     Liz -0.031858427 0.02722676
22   NAME USEDyes  Nicole -0.020733795 0.02722676
23   NAME USEDyes    Ruth  0.235252461 0.02722676
24   NAME USEDyes  Warren -0.017470317 0.02722676

$`PHASE_conflPhase2&3`
   grpvar                term     grp      condval     condsd
25   NAME PHASE_conflPhase2&3    Anne  0.011554728 0.02892198
26   NAME PHASE_conflPhase2&3    Aran -0.011356669 0.02892198
27   NAME PHASE_conflPhase2&3   Becky  0.050686024 0.02892198
28   NAME PHASE_conflPhase2&3    Carl  0.002765024 0.02892198
29   NAME PHASE_conflPhase2&3 Dominic -0.025758262 0.02892198
30   NAME PHASE_conflPhase2&3    Gail  0.067480957 0.02892198
31   NAME PHASE_conflPhase2&3    Joel  0.066090920 0.02892198
32   NAME PHASE_conflPhase2&3    John  0.010975623 0.02892198
33   NAME PHASE_conflPhase2&3     Liz  0.031824430 0.02892198
34   NAME PHASE_conflPhase2&3  Nicole  0.021381923 0.02892198
35   NAME PHASE_conflPhase2&3    Ruth -0.244276222 0.02892198
36   NAME PHASE_conflPhase2&3  Warren  0.018631524 0.02892198

What will we use those results for? Predictions and interpretation! Here, we create predictions for all predictors and their levels, which we define manually in a new data frame; note that, as above, we’re using predict with re.form=NA, which computes predictions without the random effects, i.e. the predictions the model would make for new kids (predict(..., re.form=NULL) is the default and uses all random effects; see also predict’s argument allow.new.levels, for which there is a small sketch below):

nd <- expand.grid(
   PHASE_confl=levels(d$PHASE_confl), # all levels of PHASE_confl
   USED=levels(d$USED))               # all levels of USED
nd <- cbind(nd, PREDS=predict( # note: now we're not using ranefs
   m_final, re.form=NA, newdata=nd))  # for the predictions!
nd$PREDS_backtr <- sin(nd$PREDS)
tapply(nd$PREDS, list(nd$USED, nd$PHASE_confl), c)
       Phase1  Phase2&3
no  0.3866357 0.3424963
yes 0.8512803 0.7216489
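
As just mentioned, predict’s argument allow.new.levels becomes relevant once we want predictions that do involve the random-effects structure, but for children the model has never seen. A minimal sketch (the name "SomeNewKid" is of course made up): with re.form=NULL and allow.new.levels=TRUE, an unknown level of NAME gets adjustments of 0, so these predictions coincide with the fixed-effects-only ones above.

nd_new <- expand.grid(
   NAME="SomeNewKid",                  # a (made-up) child not in the data
   PHASE_confl=levels(d$PHASE_confl),  # all levels of PHASE_confl
   USED=levels(d$USED))                # all levels of USED
predict(m_final, newdata=nd_new,
   re.form=NULL,           # use the random-effects structure, ...
   allow.new.levels=TRUE)  # ... but tolerate the unknown level of NAME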

The predictions for all predictors and their levels from effects are the same, given the perfectly balanced nature of the data:

usph_d <- data.frame(               # usph_d the data frame of
   usph <- effect(                  # usph, the effect of
      "USED:PHASE_confl", m_final)) # USED:PHASE_confl in m_final
usph_d$fit.backtr <- sin(usph_d$fit)
usph_d
  USED PHASE_confl       fit         se     lower     upper fit.backtr
1   no      Phase1 0.3866357 0.03112461 0.3245275 0.4487439  0.3770746
2  yes      Phase1 0.8512803 0.05204677 0.7474226 0.9551380  0.7521247
3   no    Phase2&3 0.3424963 0.01509285 0.3123790 0.3726136  0.3358395
4  yes    Phase2&3 0.7216489 0.02746908 0.6668352 0.7764625  0.6606234
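
If you prefer emmeans (loaded above) for this step, the estimated marginal means for the same interaction should give the same point estimates here (standard errors and intervals may be computed slightly differently); just a quick sketch:

emmeans(m_final, ~ USED*PHASE_confl) # EMMs for all USED:PHASE_confl combinations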

Let’s make ‘predictions’ for all cases based on fixefs only:

d$PRED_fe <- predict( # make d$PRED_fe the result of predicting
   m_final,           # from m_final
   re.form=NA)        # use no ranefs

Let’s make ‘predictions’ for all cases based on fixed and random effects:

d$PRED_ae <- predict( # make d$PRED_ae the result of predicting
   m_final)           # from m_final
head(d)
  CASE NAME OVERLAP.asin  PHASE PHASE_confl USED OVERLAP   PRED_fe   PRED_ae
1    1 Anne    0.8183220 Phase1      Phase1  yes    0.73 0.8512803 0.8453800
2    2 Anne    0.7342088 Phase2    Phase2&3  yes    0.67 0.7216489 0.7273033
3    3 Anne    0.7075844 Phase3    Phase2&3  yes    0.65 0.7216489 0.7273033
4    4 Anne    0.4555987 Phase1      Phase1   no    0.44 0.3866357 0.3909410
5    5 Anne    0.3897963 Phase2    Phase2&3   no    0.38 0.3424963 0.3583564
6    6 Anne    0.4115168 Phase3    Phase2&3   no    0.40 0.3424963 0.3583564

It seems like the ranefs are doing quite a bit of work:

plot(d$PRED_ae, d$PRED_fe, xlim=c(0, 1), ylim=c(0, 1),
   pch=16, col=ifelse(d$USED=="yes", "#00FF0020", "#FF000020"))
   grid(); abline(0, 1, lty=3)
Figure 13: Fixed- vs. all-effects predictions of the final model
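
One way to put a number on that impression (a suggestion of mine, not part of the original analysis) is MuMIn’s marginal vs. conditional R-squared, i.e. the variance explained by the fixed effects alone vs. by the fixed and random effects together:

r.squaredGLMM(m_final) # R2m (fixefs only) vs. R2c (fixefs & ranefs)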

1.4 Visual interpretation

1.4.1 Fixed effects

We can always begin with a standard effects plot for the fixed effects:

plot(usph, ylim=c(0, 1.6), grid=TRUE)
plot(usph, ylim=c(0, 1.6), multiline=TRUE, grid=TRUE)
plot(usph, ylim=c(0, 1.6), multiline=TRUE, confint=list(style="auto"), grid=TRUE)
Figure 14: The interaction of PHASE & USED in the final model 1
Figure 15: The interaction of PHASE & USED in the final model 1
Figure 16: The interaction of PHASE & USED in the final model 1

And remember: always also plot the reverse perspective:

plot(usph, ylim=c(0, 1.6), x.var="PHASE_confl", grid=TRUE)
plot(usph, ylim=c(0, 1.6), x.var="PHASE_confl", multiline=TRUE, grid=TRUE)
plot(usph, ylim=c(0, 1.6), x.var="PHASE_confl", multiline=TRUE, confint=list(style="auto"), grid=TRUE)
Figure 17: The interaction of PHASE & USED in the final model 2
Figure 18: The interaction of PHASE & USED in the final model 2
Figure 19: The interaction of PHASE & USED in the final model 2

But let’s now do a more comprehensive plot for the fixed effects. First, we generate a version of nd that also includes the levels of NAME, the random-effects grouping factor.

nd_c <- expand.grid( # create all combinations of all levels of
   NAME=levels(d$NAME),               # NAME
   PHASE_confl=levels(d$PHASE_confl), # PHASE_confl
   USED=levels(d$USED))               # USED

Then, we add the predictions, this time including the random effects:

nd_c <- data.frame(     # make new nd_c
   nd_c,                # the old nd_c
   PREDICTIONS=predict( # plus predictions
      m_final,          # based on the final model
      newdata=nd_c,     # for nd_c
      re.form=NULL))    # using fixed & random effects
nd_c.split <- split( # split
   nd_c,             # nd_c up for
   nd_c$USED)        # plotting separate panels for USED

Finally, we plot everything, with separate predictions and slopes for each child and a right y-axis for the original scale of the response variable:

op <- par(no.readonly=TRUE); par(mar=c(2,4,5,4)); par(mfrow=c(1,2))
plot(c(0.8,2.2), c(0,1.7), type="n", axes=FALSE,
   xlab="PHASE (conflated)", ylab="Arcsine-transformed OVERLAP",
   main="The effect of PHASE_conflated on the predicted %\nof OVERLAP (when USED='no')")
   axis(1, at=1:2, labels=c("Phase 1","Phase 2&3")); axis(2); grid()
   axis(4, at=axTicks(2), labels=round(sin(axTicks(2)), 1))
   for (i in 1:12) {
      lines(1:2, split(nd_c.split[[1]], nd_c.split[[1]]$NAME)[[i]]$PREDICTIONS, col="darkgrey")
   }
   lines(1:2, usph$fit[c(1,3)], lwd=5)
   points(pch=16, col="#0000FF30",
      x=jitter(as.numeric(d[d$USED=="no","PHASE_confl"]=="Phase2&3")+1, factor=0.5),
      y=d$OVERLAP.asin[d$USED=="no"])
plot(c(0.8,2.2), c(0,1.7), type="n", axes=FALSE,
   xlab="PHASE (conflated)", ylab="Arcsine-transformed OVERLAP",
   main="The effect of PHASE_conflated on the predicted %\nof OVERLAP (when USED='yes')")
   axis(1, at=1:2, labels=c("Phase 1","Phase 2&3")); axis(2); grid()
   axis(4, at=axTicks(2), labels=round(sin(axTicks(2)), 1))
   for (i in 1:12) {
      lines(1:2, split(nd_c.split[[2]], nd_c.split[[2]]$NAME)[[i]]$PREDICTIONS, col="darkgrey")
   }
   lines(1:2, usph$fit[c(2,4)], lwd=5)
   points(pch=16, col="#0000FF30",
      x=jitter(as.numeric(d[d$USED=="yes","PHASE_confl"]=="Phase2&3")+1, factor=0.5),
      y=d$OVERLAP.asin[d$USED=="yes"])
par(mfrow=c(1,1))
par(op)
Figure 20: The interaction of PHASE & USED in the final model 3

Interpretation:

  • when USED is no, the difference between the two time periods of PHASE 1 and PHASE 2&3 is very small (with the value of OVERLAP.asin being a little smaller in the later phase, namely by 0.044);
  • when USED is yes, the difference between the two time periods of PHASE 1 and PHASE 2&3 is a bit bigger (with the value of OVERLAP.asin being smaller in the later phase, namely by 0.13).
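
Both differences can be recomputed directly from the fixed-effects predictions in nd as a quick check:

with(nd, PREDS[USED=="no"  & PHASE_confl=="Phase1"] -
         PREDS[USED=="no"  & PHASE_confl=="Phase2&3"]) # approx. 0.044
with(nd, PREDS[USED=="yes" & PHASE_confl=="Phase1"] -
         PREDS[USED=="yes" & PHASE_confl=="Phase2&3"]) # approx. 0.13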

1.4.2 Random effects

One particularly nice summary graph that you can use for random effects with not too many levels is this kind of dot chart. It represents, for each level of the random effect, the adjustments to the varying intercepts and slopes for each predictor, together with their 95% confidence intervals; obviously, we would want those adjustments to be roughly normally distributed, and it would be nice if many of the confidence intervals overlapped with 0:

lattice::dotplot(ranef(m_final, condVar=TRUE))
$NAME
Figure 21: The adjustments to intercepts and slopes of the final model
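
If you want a normal quantile-quantile view of the same adjustments (merely an alternative display, not part of the original code), lme4 provides a qqmath method for ranef objects:

lattice::qqmath(ranef(m_final, condVar=TRUE)) # one Q-Q panel per kind of adjustment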

More and separate analyses are of course possible, but they are not particularly useful here, given the small number of kids we had data for:

par(mfrow=c(1,3))
lapply(randeffs.split, \(af) hist(af$condval, main=""))
par(mfrow=c(1,1))

Some recommendations of other potentially (!) useful functions:
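
For instance (these are merely a few candidates that come to mind, all from packages loaded above, not an exhaustive or authoritative list):

VarCorr(m_final)                # variances/correlations of the random effects
as.data.frame(VarCorr(m_final)) # the same, but as a data frame
confint(m_final, method="Wald") # quick Wald CIs (fixefs only; variance parameters come out as NA)
isSingular(m_final)             # is the random-effects part of the fit singular?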

1.5 Exercises

1.5.1 Understanding (!) m_final’s predictions

Let’s take a random speaker, say Dominic, and let’s take a random data point from that speaker, say data point 27 of the whole data set:

d[27,c(1:3,5,7)]
   CASE    NAME OVERLAP.asin PHASE_confl OVERLAP
27   27 Dominic    0.6944983    Phase2&3    0.64

Compute predictions for that data point manually, namely

  • the predicted value of the response variable based on only the fixed effects of m_final;
  • the predicted value of the response variable based on fixed and random effects of m_final;

but do so with just the fixed and random-effects coefficients of the model and without predict or effects or glht etc.

fixef(m_final)       # the fixef coefficients applying to all/new speakers
                (Intercept)                     USEDyes
                 0.38663568                  0.46464458
        PHASE_conflPhase2&3 USEDyes:PHASE_conflPhase2&3
                -0.04413936                 -0.08549204 
randeffs["Dominic",] # the ranef adjustments for Dominic
        (Intercept)    USEDyes PHASE_conflPhase2&3
Dominic  0.02427771 0.02477283         -0.02575826
coef(m_final)$NAME["Dominic",]
        (Intercept)   USEDyes PHASE_conflPhase2&3 USEDyes:PHASE_conflPhase2&3
Dominic   0.4109134 0.4894174         -0.06989762                 -0.08549204
# here is the prediction based only on fixefs:
 0.38663568 + # intercept
 0.46464458 + # USEDyes
-0.04413936 + # PHASE_conflPhase2&3
-0.08549204   # USEDyes:PHASE_conflPhase2&3
[1] 0.7216489
# which returns the same as:
d[27,"PRED_fe"] # or
[1] 0.7216489
predict(m_final, type="response", re.form=NA)[27]
       27
0.7216489 
# here is the prediction based on fixefs *and* ranefs:
 0.38663568 +  0.02427771 + # intercept (sums up to coef 1)
 0.46464458 +  0.02477283 + # USEDyes (sums up to coef 2)
-0.04413936 + -0.02575826 + # PHASE_conflPhase2&3 (sums up to coef 3)
-0.08549204                 # USEDyes:PHASE_conflPhase2&3 (ia term, no ranef there)
[1] 0.7449411
# which returns the same as:
d[27,"PRED_ae"]
[1] 0.7449411
predict(m_final, type="response")[27]
       27
0.7449411 
fitted(m_final)[27]
       27
0.7449411 
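
A more compact way of writing the same two sums (a sketch using matrix multiplication, not required by the exercise) multiplies case 27’s row of the fixed-effects model matrix with the relevant coefficient vectors:

drop(model.matrix(m_final)[27,] %*%              # row 27 of the model matrix times
   fixef(m_final))                               # the fixefs: 0.7216489
drop(model.matrix(m_final)[27,] %*%              # row 27 of the model matrix times
   t(as.matrix(coef(m_final)$NAME["Dominic",]))) # Dominic's coefs: 0.7449411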

1.5.2 Phase 2 & 3 = different a priori?

How would we have proceeded if we had expected in advance that phases 2 and 3 wouldn’t differ from each other?

d$PHASE.contr <- d$PHASE
# PHASE           1     2     3
P1_vs_P2.P3 <- c(2/3, -1/3, -1/3)
P2_vs_P3    <- c( 0 ,  1/2, -1/2)
(contrasts(d$PHASE.contr) <- cbind(P1_vs_P2.P3, P2_vs_P3))
summary(d)

summary(lmerTest::lmer(m_final_r2, data=d), correlation=FALSE)$coefficients # for ease of comparison
summary(m_final_r2_contr <- lmerTest::lmer(
   OVERLAP.asin ~ 1 +              # intercept
      USED*PHASE.contr +           # fixed effects
      (1 + USED+PHASE.contr|NAME), # random effects
   REML=FALSE, data=d), correlation=FALSE)$coefficients
# summary(m_final_r2_contr)$coefficients[6,5]
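
As a quick sanity check on these planned contrasts (my addition, not part of the exercise): the two contrast columns are orthogonal, i.e. their cross product has 0s off the diagonal, so the two comparisons address separate aspects of PHASE:

crossprod(cbind(P1_vs_P2.P3, P2_vs_P3)) # off-diagonal cells are 0 -> orthogonal contrasts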

2 Session info

sessionInfo()
R version 4.5.2 (2025-10-31)
Platform: x86_64-pc-linux-gnu
Running under: Pop!_OS 22.04 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so;  LAPACK version 3.10.0

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

time zone: America/Los_Angeles
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  compiler  methods
[8] base

other attached packages:
 [1] MuMIn_1.48.11   multcomp_1.4-29 TH.data_1.1-5   MASS_7.3-65
 [5] survival_3.8-6  mvtnorm_1.3-3   lme4_1.1-38     Matrix_1.7-4
 [9] emmeans_2.0.1   effects_4.2-4   car_3.1-5       carData_3.0-6
[13] dplyr_1.2.0     STGmisc_1.06    Rcpp_1.1.1      magrittr_2.0.4

loaded via a namespace (and not attached):
 [1] tidyselect_1.2.1             viridisLite_0.4.3
 [3] farver_2.1.2                 fields_17.1
 [5] S7_0.2.1                     fastmap_1.2.0
 [7] digest_0.6.39                dotCall64_1.2
 [9] estimability_1.5.1           lifecycle_1.0.5
[11] rlang_1.1.7                  LMERConvenienceFunctions_3.0
[13] tools_4.5.2                  yaml_2.3.12
[15] knitr_1.51                   htmlwidgets_1.6.4
[17] LCFdata_2.0                  plyr_1.8.9
[19] RColorBrewer_1.1-3           abind_1.4-8
[21] numDeriv_2016.8-1.1          nnet_7.3-20
[23] grid_4.5.2                   stats4_4.5.2
[25] xtable_1.8-4                 colorspace_2.1-2
[27] ggplot2_4.0.2                scales_1.4.0
[29] insight_1.4.6                cli_3.6.5
[31] survey_4.4-8                 rmarkdown_2.30
[33] reformulas_0.4.4             generics_0.1.4
[35] otel_0.2.0                   rstudioapi_0.18.0
[37] reshape2_1.4.5               minqa_1.2.8
[39] DBI_1.2.3                    stringr_1.6.0
[41] splines_4.5.2                maps_3.4.3
[43] parallel_4.5.2               mitools_2.4
[45] vctrs_0.7.1                  boot_1.3-32
[47] sandwich_3.1-1               jsonlite_2.0.0
[49] Formula_1.2-5                glue_1.8.0
[51] spam_2.11-3                  nloptr_2.2.1
[53] codetools_0.2-20             stringi_1.8.7
[55] gtable_0.3.6                 lmerTest_3.2-0
[57] tibble_3.3.1                 pillar_1.11.1
[59] htmltools_0.5.9              R6_2.6.1
[61] Rdpack_2.6.5                 evaluate_1.0.5
[63] lattice_0.22-7               ez_4.4-0
[65] rbibutils_2.4.1              coda_0.19-4.1
[67] nlme_3.1-168                 mgcv_1.9-4
[69] xfun_0.56                    zoo_1.8-15
[71] pkgconfig_2.0.3