--- title: "Using the bestNormalize Package" author: "Ryan A Peterson" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{bestNormalize} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE} knitr::opts_chunk$set(echo = TRUE, fig.height = 5, fig.width = 7) library(bestNormalize) ``` # Introduction The `bestNormalize` package contains a suite of transformation-estimating functions that can be used to normalize data. The function of the same name attempts to find and execute the best of all of these potential normalizing transformations. In this package, we define "normalize" as in "to render data Gaussian", rather than transform it to the 0-1 scale. There are many instances where researchers may want to normalize a variable: 1) The (often problematic) assumption of normality of the outcome (conditional on the covariates) in the classical linear regression problem. Over the years many methods have been used to relax this assumption: generalized linear models, quantile regression, survival models, etc. One technique that is still somewhat popular in this context is to "beat the data" to look normal via some kind of normalizing transformation. This could be something as simple as a log transformation, or something as complex as a Yeo-Johnson transformation. While perhaps not the most elegant solution the problem, often this technique works well as a quick and dirty solution. 2) Another increasingly popular application of normalization occurs in applied regression settings with highly skewed distributions of the covariates. In these settings, there exists the tendency to have high leverage points (and highly influential points), even when one centers and scales the covariates. When examining interactions, these influential points can become especially problematic since the leverage of that point is amplified for every child interaction of which it is a parent. Normalization of the covariates mitigates the leverage and influence of these covariates, which allows for easier model selection. As a result, popular model selection packages such as `caret` and `recipes` have built-in mechanisms to normalize the predictor variables (they call this "preprocessing"). This concept is unique in that it forgoes the assumption of linearity between the outcome and the covariate, opting instead for a linear relationship between Y and the transformed value of the covariate (which in many cases may be more plausible). This package is designed to make this normalization transformation as effortless and consistent as possible. This package also introduces Ordered Quantile (ORQ) normalization via `orderNorm`, a normalization technique based off of a rank mapping to the normal distribution, which *guarantees* normally distributed transformed data (if ties are not present). We show in the accompanying paper that ORQ normalization performs very consistently across different distributions. It is able to successfully transform left/right skewed data, multimodal data, and even data generated from a Cauchy distribution (efficacy of transformation was tested on out-of-sample data). The transformations contained in this package and implemented in `bestNormalize` are reversible (i.e., 1-1), which allows for straight-forward interpretation and consistency. In other words, any analysis performed on the normalized data can be interpreted using the original unit (see application). 
# Methods

There are several normalization transformation options, each with their own implementations and limitations. While some of these methods are implemented well in other R packages, the `bestNormalize` package puts them all under the same umbrella syntax, which makes them easy to apply in a wide range of situations.

## The Lambert W x F transformation

The Lambert W x F transformation, proposed by Goerg and implemented in the `LambertW` package, is essentially a mechanism that de-skews a random variable $X$ using moments. The method is motivated by a system theory and is alleged to be able to transform any random variable into any other kind of random variable, thus being applicable to a large number of cases. One of the package's main functions is `Gaussianize`, which is similar in spirit to the purpose of this package. However, I have found that in practice this method often does not perform as well as the Box Cox or the Yeo-Johnson transformation. This transformation can handle three types of departures from normality: skewed, heavy-tailed, and skewed heavy-tailed. For more details on this transformation, consult the documentation for the `LambertW` package.

## The Box Cox transformation

The Box Cox transformation, proposed by Box and Cox in their famous 1964 paper and implemented with differing syntax and methods in many packages in R (see `caret`, `MASS`, `forecast`), is a straightforward transformation that typically only involves one parameter, $\lambda$:

$$
g(x; \lambda) = \boldsymbol 1 _{(\lambda \neq 0)} \frac{x^\lambda-1}{\lambda} + \boldsymbol 1_{(\lambda = 0)} \log x ,
$$

where $x$ refers to the datum in the original unit (pre-transformation). The $\lambda$ parameter can be estimated via maximum likelihood.

## The Yeo-Johnson transformation

The Yeo-Johnson transformation, proposed by Yeo and Johnson in 2000, attempts to find the value of $\lambda$ (in the following equation) that minimizes the Kullback-Leibler distance between the normal distribution and the transformed distribution:

$$
\begin{aligned}
g(x;\lambda) &= \boldsymbol 1 _{(\lambda \neq 0, x \geq 0)} \frac{(x+1)^\lambda-1}{\lambda} \\
  &+ \boldsymbol 1_{(\lambda = 0, x \geq 0)} \log (x+1) \\
  &+ \boldsymbol 1_{(\lambda \neq 2, x < 0)} \frac{(1-x)^{2-\lambda}-1}{\lambda - 2} \\
  &+ \boldsymbol 1_{(\lambda = 2, x < 0)} \left( -\log (1-x) \right) \\
\end{aligned}
$$

This method has the advantage of working without having to worry about the domain of $x$. As with the Box Cox $\lambda$, this $\lambda$ parameter can be estimated via maximum likelihood.

## The Ordered Quantile technique

The ORQ normalization technique is based on the following transformation (originally discussed, as far as I can find, by Bartlett in 1947 and further developed in 1952 by Van der Waerden in "Order tests for the two sample problem and their power"). Let $x$ refer to the original data. Then the transformation is:

$$
g(x) = \Phi ^{-1} \left(\frac{\text{rank} (x) - 1/2}{\text{length}(x) }\right)
$$

ORQ normalization, while based on this transformation, is a little more complicated to implement in settings where the transformation needs to be applied to new data. On new data *within* the range of the original data, the transformation refers to the linear interpolation between two of the original data points. On new data *outside* the range of the original data, the transformation returns a warning and extrapolates using a shifted logit approximation of the ranks of the original data.
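As a quick illustration of the in-sample rank-to-normal mapping above, here is a minimal sketch in R (not the package's internal implementation; the simulated vector is arbitrary). With no ties, the values `orderNorm` produces should closely match this direct computation:

```{r orq_sketch}
# Sketch of the mapping g(x) = qnorm((rank(x) - 1/2) / length(x))
set.seed(1)
z <- rgamma(100, 1, 1)                       # skewed example vector (no ties)
g_z <- qnorm((rank(z) - 0.5) / length(z))    # direct application of the formula

# In-sample, orderNorm's transformed values should closely match this mapping
all.equal(unname(g_z), unname(orderNorm(z, warn = FALSE)$x.t))
```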
This is visualized below via the `iris` data set, on the `Petal.Width` variable.

```{r orq_vis, echo = FALSE}
x <- iris$Petal.Width
on <- orderNorm(x, warn = FALSE)
xx <- seq(min(x) - 1, max(x) + 1, length = 1000)
yy <- suppressWarnings(predict(on, xx))
r <- ((rank(x) - .5) / (length(x)))
f <- suppressWarnings(glm(r ~ x, family = "binomial"))
p <- qnorm(predict(f, newdata = data.frame(x = xx), type = 'response'))

plot(x, on$x.t, pch = 20, xlim = range(xx), ylim = range(p, yy), 
     main = "ORQ transformation", 
     xlab = "Original Value", ylab = "Transformed Value")
lines(xx, p, col = '1', lwd = 1, lty = 2)
lines(xx, yy, col = 'slateblue', lwd = 2)

# Add legend
legend('bottomright', 
       c('Original data', 'Transformed values for new data', 'Approximation for Extrapolation'),
       bty = 'n', lty = c(0, 1, 2), lwd = c(0, 2, 1), 
       pch = c(20, NA, NA),
       col = c(1, "slateblue", 1))
```

The reason for the shifted logit extrapolation is that it ensures the function is 1-1 and can handle data outside the original (observed) domain. The effect of this approximation will usually be relatively minimal, since we should not expect to see many observations outside the observed range if the sample size is large enough.

The ORQ technique will not guarantee a normal distribution in the presence of ties, but it still could yield the best normalizing transformation when compared to the other possible approaches.

## Other included transformations

In addition to the techniques above, the `bestNormalize` package performs and evaluates:

- $\log_b(x + a)$ with $a = \max(0, -\min(x) + \epsilon)$ by default
- $\sqrt{x + a}$ with $a = \max(0, -\min(x) + \epsilon)$ by default
- $\exp(x)$
- $\text{arcsinh}(x) = \log(x + \sqrt{x^2 + 1})$

## Other not-included transformations

There have been a range of other normalization techniques discussed since the original Box-Cox paper that are not included in this package (at the time of writing). Many of these transformations have their own strengths and weaknesses. These include (but are not limited to): Modified Box Cox (1964), Manly's Exponential (1976), John/Draper's Modulus (1980), and Bickel/Doksum's Modified Box Cox (1981). The framework of this package is to create an S3 class for each transformation, so other normalization techniques would be easy extensions of this package (readers should feel free to submit a pull request to this package's GitHub page with new transformation techniques if they feel so inclined).

The `bestNormalize` package also includes a function to perform a binarizing transformation. This is provided as a potential "last resort" if a vector cannot reasonably be transformed to a normally distributed variable. It is most useful when a user is automatically normalizing covariates and may accidentally try to normalize a vector with too few unique values.

## Selecting the best technique

The `bestNormalize` function selects the best transformation according to the Pearson P statistic (divided by its degrees of freedom), as calculated by the `nortest` package. There are a variety of normality tests out there, but the benefit of the Pearson P / df is that it is a relatively interpretable goodness-of-fit test, and the ratio P / df can be compared between transformations as an absolute measure of the departure from normality (if the data follow close to a normal distribution, this ratio will be close to 1).
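As a quick illustration (a sketch only; the simulated vector and the choice of `orderNorm` as the candidate transformation are arbitrary), this ratio can be computed directly from the output of `nortest::pearson.test`:

```{r pearson_sketch}
# Sketch: Pearson P / df as an absolute measure of departure from normality
set.seed(1)
z   <- rgamma(500, 1, 1)                  # a right-skewed example vector
z_t <- orderNorm(z, warn = FALSE)$x.t     # one candidate transformation

pt <- nortest::pearson.test(z_t)
unname(pt$statistic / pt$df)              # values near 1 indicate a close fit to normality
```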
The transformation whose transformed values fit normality the closest according to this statistic (or equivalently, this ratio) is selected by `bestNormalize`. The ratios are printed when the object is printed; see the examples in the next section.

The ORQ technique is destined to win by this (in-sample) metric, since it forces the transformed data to follow a normal distribution. For this reason, by default the `bestNormalize` function calculates an *out-of-sample* estimate for the P / df statistic. Since this method requires repeated cross-validation, it can be a little bit problematic:

1) The results and the chosen transformation can depend on the seed,
2) It takes considerably longer to estimate than the in-sample statistic, and
3) It is unclear how to choose the number of folds and repeats.

To respond to these issues: (1) is only important for small sample sizes, and luckily the transformations chosen in cases where it's a close call will look very similar to one another, so it shouldn't make a practical difference. For (2), we have allowed for parallelization to speed up this process somewhat, although there may be better ways of speeding it up. For (3), we recommend 10-fold cross-validation with 5 repeats as the default, but if the sample is small, it could be worth going to 5 folds with more repeats; accurate estimation of P / df requires a relatively large fold size (as a rule of thumb, 20 observations per fold seems to be enough for most cases, but this can of course vary depending on the distribution).

If the user specifies `out_of_sample = FALSE`, the process speeds up greatly because the function does not perform repeated CV. However, the ORQ normalization will virtually always be chosen. If this option is used, we recommend also setting `allow_orderNorm = FALSE` in order to pick the best of the other transformations; otherwise the user may as well choose `orderNorm` to begin with.

The parallelization for `bestNormalize` is performed via the `parallel` package and implemented using the `cluster` argument. Users should create a cluster using `makeCluster`, then pass that cluster into the `bestNormalize` function (see the parallelization example in the Examples section). The amount that this speeds up the estimation of out-of-sample estimates depends on the number of repeats, the number of cores, and the sample size of the vector to be normalized. The plot below shows the estimation time for 10-fold cross-validation with 15 repeats on a gamma-distributed random variable with various sample sizes and cores. Note that the estimates below do not count the time it takes to call `makeCluster` or `stopCluster`. Evidently, the speed-up occurs past a few thousand observations for 15 repeats.

```{r par_vis, echo = FALSE, out.width = "75%"}
knitr::include_graphics("parallel_timings.jpg")
```

# Examples

In this section, I provide some code that performs each of the transformations described in the prior section.

## Basic Implementation

```{r gen_data}
# Generate some data
set.seed(100)
x <- rgamma(250, 1, 1)
MASS::truehist(x, nbins = 12)
```

This data is clearly not normal. Let's use the `bestNormalize` functionality to perform a suite of potential transformations and see how each method performs.
```{r vis_code}
# Perform some transformations individually

# arcsinh transformation
(arcsinh_obj <- arcsinh_x(x))

# Box Cox transformation
(boxcox_obj <- boxcox(x))

# Yeo-Johnson transformation
(yeojohnson_obj <- yeojohnson(x))

# orderNorm transformation
(orderNorm_obj <- orderNorm(x))

# Pick the best one automatically
(BNobject <- bestNormalize(x))

# Last resort - binarize
(binarize_obj <- binarize(x))
```

These objects can then be fed into the `predict` function to perform the transformation on new values. The reverse transformation is also possible with this function. Below, we plot the transformation for a range of new x values.

```{r vis_data}
xx <- seq(min(x), max(x), length = 100)
plot(xx, predict(arcsinh_obj, newdata = xx), type = "l", col = 1, ylim = c(-4, 4),
     xlab = 'x', ylab = "g(x)")
lines(xx, predict(boxcox_obj, newdata = xx), col = 2)
lines(xx, predict(yeojohnson_obj, newdata = xx), col = 3)
lines(xx, predict(orderNorm_obj, newdata = xx), col = 4)

legend("bottomright", 
       legend = c("arcsinh", "Box Cox", "Yeo-Johnson", "OrderNorm"), 
       col = 1:4, lty = 1, bty = 'n')
```

To examine how each of them performed (in-sample), we can visualize the transformed values in a histogram.

```{r hist_trans, fig.height=8, fig.width = 7}
par(mfrow = c(2, 2))
MASS::truehist(arcsinh_obj$x.t, main = "Arcsinh transformation", nbins = 12)
MASS::truehist(boxcox_obj$x.t, main = "Box Cox transformation", nbins = 12)
MASS::truehist(yeojohnson_obj$x.t, main = "Yeo-Johnson transformation", nbins = 12)
MASS::truehist(orderNorm_obj$x.t, main = "orderNorm transformation", nbins = 12)
```

The best transformation in this case is plotted below.

```{r hist_best}
par(mfrow = c(1, 2))
MASS::truehist(BNobject$x.t, 
               main = paste("Best Transformation:", 
                            class(BNobject$chosen_transform)[1]), nbins = 12)
plot(xx, predict(BNobject, newdata = xx), type = "l", col = 1, 
     main = "Best Normalizing transformation", ylab = "g(x)", xlab = "x")
```

## Visualizing out-of-sample estimates of normality

The `bestNormalize` function saves the estimated normality statistics that it obtained for each fold and repeat of cross-validation. Users can visualize these relatively easily via a boxplot (see below), which may give some insight into how strongly the transformation is truly preferred by the normality statistic, or whether it doesn't matter much which transformation is chosen. In this example, Box-Cox, arcsinh, and ORQ seem to be performing similarly to one another, whereas the Yeo-Johnson is under-performing.

```{r boxplot, fig.width=10, out.width="100%"}
boxplot(log10(BNobject$oos_preds), yaxt = 'n')
axis(2, at = log10(c(.1, .5, 1, 2, 5, 10)), labels = c(.1, .5, 1, 2, 5, 10))
```

## Improving speed of estimation

The `bestNormalize` function can be quite slow, since it uses repeated cross-validation to estimate the out-of-sample efficacy of each transformation. If the user specifies `out_of_sample = FALSE`, the process speeds up greatly, but the ORQ normalization will virtually always be chosen. If this option is used, we recommend setting `allow_orderNorm = FALSE` in order to pick the best of the other transformations.

```{r bn_output}
bestNormalize(x, allow_orderNorm = FALSE, out_of_sample = FALSE)
```

We also make it easy for users to parallelize the repeated CV process via the `cluster` argument and the `parallel` package. A cluster can be set up with `makeCluster` and passed to `bestNormalize` via the `cluster = ` argument.
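For example, a minimal sketch of this workflow (not evaluated here; the two-worker cluster size is an arbitrary choice):

```{r parallel_sketch, eval = FALSE}
library(parallel)

# Set up a cluster, pass it to bestNormalize, then shut it down
cl <- makeCluster(2)                      # two workers; adjust for your machine
BN_par <- bestNormalize(x, cluster = cl)
stopCluster(cl)

BN_par
```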
This speed-up depends on the number of repeats, the length of the vector to be transformed, and the number of cores in the cluster. Parallelization could well make the computation slower due to the overhead of setting up the workers, so in cases where the sample size is relatively small and the number of repeats is also small, it is probably better not to bother with parallelization.

## Leave-one-out CV

As of version 1.3, `bestNormalize` has the `loo` argument, which if set to `TRUE` will compute the leave-one-out cross-validation transformations for each observation and method. Instead of the out-of-sample estimate of normality being the mean of the values over repeated folds, the estimate of normality is simply calculated for the distribution of leave-one-out transformed values. This is computationally intensive, so it also works with the `cluster` argument if desired.

## Visualizing transformations in `ggplot2` with the `scales` package

As of version 1.7, users can use the `scales` package in conjunction with `ggplot2` to facilitate plotting variables that have undergone a transformation with `bestNormalize` (or any other method).

```{r scales_ex}
library(ggplot2)

x <- rgamma(1000, 1, .1)
bn <- bestNormalize(x)
bn

# say y is related linearly to the transformed x
y <- bn$x.t * 1 + rnorm(1000)

# A log transformation does OK...
ggplot(data.frame(x = x, y = y), aes(x, y)) + 
  geom_point() + 
  scale_x_continuous(trans = "log", breaks = scales::log_breaks())

# Create bestNormalize scale for use in ggplot (using bestNormalize object)
bn_trans <- scales::trans_new(
  name = "bn_trans",
  trans = function(x) predict(bn, newdata = x),
  inverse = function(x) predict(bn, newdata = x, inverse = TRUE)
)

ggplot(data.frame(x = x, y = y), aes(x, y)) + 
  geom_point() + 
  scale_x_continuous(trans = bn_trans)
```

# Application to `autotrader` data

The `autotrader` data set was scraped from the [autotrader website](https://www.autotrader.com/) as part of this package (and because, at the time of writing, I needed to buy a car). I apply the `bestNormalize` functionality to de-skew mileage, age, and price in my pricing model. See `?autotrader` for more information on this data set.

```{r load_appdata}
data("autotrader")
autotrader$yearsold <- 2017 - autotrader$Year

### Using best-normalize
(priceBN <- bestNormalize(autotrader$price, r = 1, k = 5))
```

Note that `orderNorm` throws a warning due to the ties in the price vector. We can see that the estimated normality statistic for the orderNorm transformation is close to one, however, so we know it is performing quite well. It is also performing better than all of the other transformations.

```{r bn_mileage}
(mileageBN <- bestNormalize(autotrader$mileage, r = 1, k = 5))
```

Similarly, the ORQ normalization performed best for mileage, and the ties did not seem to be too big of a problem.

```{r bn_yearsold}
(yearsoldBN <- bestNormalize(autotrader$yearsold, r = 1, k = 5))
```

Here we see something peculiar: none of the normalizing transformations performed well according to the normality statistics. The frequency of ties in this case makes it very difficult to find a normalizing transformation (see the figure below as to why this is). Even so, orderNorm is chosen, as it has the lowest estimated P/df statistic.
```{r hist_app, fig.height=8, fig.width=7}
par(mfrow = c(3, 2))
MASS::truehist(autotrader$price)
MASS::truehist(priceBN$x.t)
MASS::truehist(autotrader$mileage)
MASS::truehist(mileageBN$x.t)
MASS::truehist(autotrader$yearsold)
MASS::truehist(yearsoldBN$x.t)
```

```{r hist_app2}
par(mfrow = c(2, 2))
price.xx <- seq(min(autotrader$price), max(autotrader$price), length = 100)
mileage.xx <- seq(min(autotrader$mileage), max(autotrader$mileage), length = 100)
yearsold.xx <- seq(min(autotrader$yearsold), max(autotrader$yearsold), length = 100)

plot(price.xx, predict(priceBN, newdata = price.xx), type = "l", 
     main = "Price bestNormalizing transformation", 
     xlab = "Price ($)", ylab = "g(price)")
plot(mileage.xx, predict(mileageBN, newdata = mileage.xx), type = "l", 
     main = "Mileage bestNormalizing transformation", 
     xlab = "Mileage", ylab = "g(Mileage)")
plot(yearsold.xx, predict(yearsoldBN, newdata = yearsold.xx), type = "l", 
     main = "Years-old bestNormalizing transformation", 
     xlab = "Years-old", ylab = "g(Years-old)")
```

Next, we will fit a linear model on the transformed values of each variable. The reverse transformation allows us to visualize how these variables enter the model on their original units. For the sake of illustration, we also plot the effects of a GAM model to see whether there is a big difference; the GAM fits seem to be more influenced by outliers than the fits to the transformed data. Only 1000 points are plotted to reduce the plot's memory footprint.

```{r app_vis}
autotrader$price.t <- priceBN$x.t
autotrader$mileage.t <- mileageBN$x.t
autotrader$yearsold.t <- yearsoldBN$x.t

fit4 <- lm(price.t ~ mileage.t + yearsold.t, data = autotrader)
summary(fit4)

miles.t <- predict(mileageBN, newdata = mileage.xx)
c1 <- coef(fit4)["mileage.t"]

plot_idx <- sample(1:nrow(autotrader), size = 1000)

par(mfrow = c(1, 1))
plot(
  mileageBN$x.t[plot_idx], priceBN$x.t[plot_idx],
  pch = 16, col = grey(.1, alpha = .2), 
  main = "Estimated linear effect (using transformed data)", 
  xlab = "g(Mileage)", ylab = "g(Price)"
)
lines(miles.t, coef(fit4)[1] + c1 * miles.t, col = "slateblue", lwd = 2)

## Mileage effect
plot(
  autotrader$mileage[plot_idx], autotrader$price[plot_idx],
  pch = 16, col = grey(.1, alpha = .2), 
  main = "Mileage effect (re-transformed to original unit)", 
  xlab = "Mileage", ylab = "Price"
)
line_vals <- miles.t * c1 + coef(fit4)[1]
lines(
  mileage.xx, 
  y = predict(priceBN, newdata = line_vals, inverse = TRUE), 
  lwd = 2, col = "slateblue"
)

# Compare to GAM fit
fit_gam <- mgcv::gam(price ~ s(yearsold) + s(mileage), data = autotrader)
p_gam <- predict(fit_gam, 
                 newdata = data.frame(yearsold = mean(autotrader$yearsold), 
                                      mileage = mileage.xx))
lines(mileage.xx, p_gam, lwd = 2, col = 'green3')
legend(
  'topright', c("GAM fit", "Transformed linear fit"), 
  lwd = 2, col = c("green3", "slateblue"), bty = "n"
)

## Years Old effect
yo.t <- predict(yearsoldBN, newdata = yearsold.xx)
c2 <- coef(fit4)["yearsold.t"]

plot(
  jitter(autotrader$yearsold[plot_idx], 1.5), autotrader$price[plot_idx],
  pch = 16, col = grey(.1, alpha = .2), 
  main = "Years old effect (re-transformed to original unit)", 
  xlab = "Age (Jittered)", ylab = "Price"
)
line_vals <- yo.t * c2 + coef(fit4)[1]
lines(
  yearsold.xx, 
  y = predict(priceBN, newdata = line_vals, inverse = TRUE), 
  lwd = 2, col = "slateblue"
)

# Compare to GAM fit
p_gam <- predict(fit_gam, 
                 newdata = data.frame(yearsold = yearsold.xx, 
                                      mileage = mean(autotrader$mileage)))
lines(yearsold.xx, p_gam, lwd = 2, col = 'green3')
legend(
  'topright', c("GAM fit", "Transformed linear fit"), 
  lwd = 2, col = c("green3", "slateblue"), bty = "n"
)
```