Getting started with plmmr

Tabitha Peter

library(plmmr)
#> Loading required package: bigalgebra
#> Loading required package: bigmemory

Introduction

plmmr is a package for fitting Penalized Linear Mixed Models in R. The package was created to fit penalized regression models to high dimensional data in which the observations are correlated. Such data often arise in genetics (e.g., GWAS in which there is population structure and/or family grouping).

The novelties of plmmr are:

  1. Integration: plmmr combines the functionality of several packages in order to do quality control, model fitting/analysis, and data visualization all in one package. For example, if you have GWAS data, plmmr will take you from PLINK files all the way to a list of SNPs for downstream analysis.

  2. Accessibility: plmmr can be run from an R session on a typical desktop or laptop computer. The user does not need access to a supercomputer or experience with the command line in order to fit models with plmmr.

  3. Handling correlation: plmmr uses a transformation that (1) measures correlation among samples and (2) uses this correlation measurement to improve predictions (via the best linear unbiased predictor, or BLUP). This means that in plmm(), there’s no need to filter data down to a ‘maximum subset of unrelated samples.’

Minimal example

Below is a minimal reproducible example of how plmmr can be used:

# library(plmmr)
fit <- plmm(admix$X, admix$y) # admix data ships with package
plot(fit) # plot the paths of the estimated coefficients

cvfit <- cv_plmm(admix$X, admix$y) # cross-validation to choose lambda
plot(cvfit) # plot the cross-validation error curve

summary(cvfit)
#> lasso-penalized model with n=197 and p=101
#> At minimum cross-validation error (lambda=0.2117):
#> -------------------------------------------------
#>   Nonzero coefficients: 7
#>   Cross-validation error (deviance): 1.96
#>   Scale estimate (sigma): 1.399
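Predictions can then be drawn from the fitted model. The sketch below assumes a predict() method with newX and type arguments, where type = "blup" requests the best linear unbiased predictor mentioned above; see ?predict.plmm for the exact interface.

# hedged sketch: in-sample prediction using the BLUP; the argument names
# here are assumptions, not the confirmed signature
y_hat <- predict(fit, newX = admix$X, type = "blup")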

Computational capability

File-backing

In many applications of high dimensional data analysis, the dataset is too large to read into R – the session will crash for lack of memory. This is particularly common when analyzing data from genome-wide association studies (GWAS). To analyze such large datasets, plmmr is equipped to use file-backing – a strategy that lets R ‘point’ to a file on disk rather than reading that file into the R session. Other packages that use this technique include bigstatsr and biglasso. plmmr uses the bigmemory package to create and store file-backed objects; the file-backed computation relies on the biglasso package by Yaohui Zeng et al. and on bigalgebra by Michael Kane et al. For processing PLINK files, we use methods from the bigsnpr package by Florian Privé.
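To illustrate the idea, here is a minimal sketch of file-backing with bigmemory itself (this shows the general technique, not plmmr's internal pipeline; the file names are hypothetical placeholders):

library(bigmemory)

# read a large delimited file into a file-backed big.matrix: the data stay
# on disk in 'geno.bin', and X holds only a pointer to them
X <- read.big.matrix("genotypes.csv", header = TRUE, type = "double",
                     backingfile = "geno.bin", descriptorfile = "geno.desc")

# a later session can re-attach the same file-backed object without
# re-reading the raw data
X <- attach.big.matrix("geno.desc")
dim(X) # dimensions are available without loading the data into RAM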

Numeric outcomes only

At this time, the package is designed for linear regression only – that is, we are considering only continuous (numeric) outcomes. We maintain that treating binary outcomes as numeric values is appropriate in some contexts, as described by Hastie et al. in the Elements of Statistical Learning, chapter 4. In the future, we would like to extend this package to handle dichotomous outcomes via logistic regression; the theoretical work underlying this is an open problem.
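For example, a two-level outcome would need to be coded numerically before model fitting (the variable names below are hypothetical):

# code a binary factor as 0/1 so it can be passed as a numeric outcome
status <- factor(c("case", "control", "control", "case"))
y <- as.numeric(status == "case") # 1 = case, 0 = control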

3 types of penalization

Since we are focused on penalized regression in this package, plmmr offers 3 choices of penalty: the minimax concave penalty (MCP), the smoothly clipped absolute deviation (SCAD) penalty, and the least absolute shrinkage and selection operator (lasso). The implementation of these penalties builds on the concepts and techniques of the ncvreg package.
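As a sketch, the penalty is assumed to be chosen via a penalty argument that follows ncvreg's convention; see ?plmm for the actual interface:

# fit the same model under each of the three penalties (argument values
# assumed to follow ncvreg's convention)
fit_mcp   <- plmm(admix$X, admix$y, penalty = "MCP")
fit_scad  <- plmm(admix$X, admix$y, penalty = "SCAD")
fit_lasso <- plmm(admix$X, admix$y, penalty = "lasso")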

Data size and dimensionality

We distinguish between the data attributes ‘big’ and ‘high dimensional.’ ‘Big’ describes the amount of space data takes up on a computer, while ‘high dimensional’ describes a context where the ratio of features (also called ‘variables’ or ‘predictors’) to observations (e.g., samples) is high. For instance, data with 100 samples and 100 variables is high dimensional, but not big. By contrast, data with 10 million observations and 100 variables is big, but not high dimensional.

plmmr is optimized for data that are high dimensional – the methods we use to estimate relatedness among observations perform best when the number of features is large relative to the number of observations.

plmmr is also designed for data that are too large to analyze in memory; it handles such data through file-backing (as described above). Our current analysis pipeline works well for data files up to about 40 GB in size. In practice, this means that plmmr is equipped to analyze GWAS data, but not biobank-sized data.

Data input types

plmmr currently works with three types of data input (each is sketched in the example after this list):

  1. Data that is stored in-memory as a matrix or data frame

  2. Data that is stored in PLINK files

  3. Data that is stored in delimited files
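Schematically, the three input types are handled as follows. The file paths below are hypothetical placeholders, and the processing calls are shown in outline only; see ?process_plink and ?process_delim for the actual arguments.

# 1. in-memory matrix or data frame: pass directly to plmm()
fit <- plmm(admix$X, admix$y)

# 2. PLINK files: convert the .bed/.bim/.fam trio to a file-backed object
#    first (schematic call)
# plink_dat <- process_plink(data_dir = "path/to/plink_files", ...)

# 3. delimited files: convert the flat file to a file-backed object first
#    (schematic call)
# delim_dat <- process_delim(data_dir = "path/to/delim_files", ...)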

Example data sets

plmmr currently includes three example data sets, one for each type of data input.