Friday, May 31, 2013

Regression regularization example

Recently I needed a simple example showing when regularization in regression is worthwhile. Here is the code I came up with (along with a basic application of parallelized code execution).

Assume you have 60 observations and 50 explanatory variables x1 to x50. All these variables are IID draws from the uniform distribution on the interval [0, 1). The predicted variable y is generated as the sum of x1 to x50 plus independent random noise N(0, 1).
For such data our objective is to compare: (a) linear regression on all 50 variables, regressions obtained by variable selection using the (b) AIC and (c) BIC criteria, and (d) Lasso regularization.
We generate the training data set 100 times and, for each of the four models, compare the predictions against the known expected value of y at 10 000 randomly selected values of the explanatory variables. As the error measure we use the mean squared deviation of the prediction from that expected value (so for an ideal model it equals 0).

Here is the code that runs the simulation. Because each step of the procedure is lengthy, I parallelize the computations.

library(parallel)

run <- function(job) {
    # one simulation replication: fit all four models on a fresh
    # training set and return their mean squared deviations
    require(lasso2)

    gen.data <- function(v, n) {
        data.set <- data.frame(replicate(v, runif(n)))
        # true y is equal to sum of x
        data.set$y <- rowSums(data.set)
        names(data.set) <- c(paste("x", 1:v, sep = ""), "y")
        return(data.set)
    }

    v <- 50  # number of explanatory variables
    n <- 60  # number of training observations

    data.set <- gen.data(v, n)
    # add noise to y in training set
    data.set$y <- data.set$y + rnorm(n)
    # evaluation set: its y column is the noise-free expected value of y
    new.set <- gen.data(v, 10000)
    model.lm <- lm(y ~ ., data.set)
    # stepwise selection: k = 2 (default) gives AIC, k = log(n) gives BIC
    model.aic <- step(model.lm, trace = 0)
    model.bic <- step(model.lm, trace = 0, k = log(n))
    # Lasso via lasso2::l1ce with its default bound; nothing swept out,
    # no standardization of the explanatory variables
    model.lasso <- l1ce(y ~ ., data.set,
                        sweep.out = NULL, standardize = FALSE)
    models <- list(model.lm, model.aic, model.bic, model.lasso)
    # mean squared deviation of each model's predictions from the
    # known expected value of y
    results <- numeric(length(models))
    for (j in seq_along(models)) {
        pred <- predict(models[[j]], newdata = new.set)
        results[j] <- mean((pred - new.set$y) ^ 2)
    }
    return(results)
}
cl <- makeCluster(4)
system.time(msd <- t(parSapply(cl, 1:100, run))) # 58.07 seconds
stopCluster(cl)

colnames(msd) <- c("lm", "aic", "bic", "lasso")
par(mar = c(2, 2, 1, 1))
boxplot(msd)
# mark the mean MSD of each method with a short red line
for (i in 1:ncol(msd)) {
    lines(c(i - 0.4, i + 0.4), rep(mean(msd[, i]), 2),
          col = "red", lwd = 2)
}
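
If you prefer numbers to reading them off the plot, the per-method averages and medians can be printed as well (a small optional extra, not part of the original listing):

colMeans(msd)          # average mean squared deviation per method
apply(msd, 2, median)  # medians, less sensitive to outlying runs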

The code produces boxplots of the distribution of the mean squared deviation from the theoretical mean and additionally draws a red line at the average mean squared deviation of each method. Here is the result:

[Boxplot: mean squared deviation for lm, aic, bic and lasso; red lines mark the means]

Notice that in this example neither AIC nor BIC improves over linear regression with all variables. However, the Lasso consistently produces significantly better models.
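
One way to see where the Lasso's advantage comes from is to look at the fitted coefficients on a single training set: with 50 explanatory variables and only 60 observations the least squares estimates are very noisy, and the L1 bound shrinks the whole coefficient vector, which cuts that variance down. A minimal sketch for comparing the two fits (the seed is arbitrary and the snippet is only an illustration, not part of the simulation above):

library(lasso2)

set.seed(1)
v <- 50
n <- 60
train <- data.frame(replicate(v, runif(n)))
train$y <- rowSums(train) + rnorm(n)
names(train) <- c(paste("x", 1:v, sep = ""), "y")

fit.lm <- lm(y ~ ., train)
fit.lasso <- l1ce(y ~ ., train, sweep.out = NULL, standardize = FALSE)

# every true slope equals 1; compare the spread of the two sets of estimates
summary(coef(fit.lm)[-1])
summary(coef(fit.lasso)[-1])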

1 comment:

  1. The LASSO is really great. It does, however, suffer from a few limitations. For example, when a subset of the covariates in the true model is highly correlated, the LASSO is prone to including one of the highly correlated covariates and ignoring the rest. More recent penalized regression models, such as Zhang's MC+ (a variant of which is available on CRAN as SparseNet), are better behaved.
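
A rough sketch of the behaviour described in this comment, staying with lasso2 (the variable names, sample size and the near-duplication of x1 in x2 are made up for illustration; how the weight is split between x1 and x2 depends on the data and the bound):

library(lasso2)

set.seed(2)
n <- 100
x1 <- runif(n)
x2 <- x1 + rnorm(n, sd = 0.01)  # nearly a copy of x1
x3 <- runif(n)
d <- data.frame(x1, x2, x3, y = x1 + x2 + x3 + rnorm(n))

# with two almost identical covariates the L1 penalty tends to put
# most (or all) of the weight on just one of them
coef(l1ce(y ~ ., d, sweep.out = NULL, standardize = FALSE))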
