Yet Another Blog in Statistical Computing

I can calculate the motion of heavenly bodies but not the madness of people. -Isaac Newton

Estimating a Beta Regression with Variable Dispersion in R

pkgs <- c('sas7bdat', 'betareg', 'lmtest')
lapply(pkgs, require, character.only = T)

# READ THE LGD DATA AND KEEP OBSERVATIONS WITH Y BELOW 1
df1 <- read.sas7bdat("lgd.sas7bdat")
df2 <- df1[which(df1$y < 1), ]

# BUILD FORMULAS: MEAN MODEL ONLY, AND MEAN MODEL | PRECISION MODEL
xvar <- paste("x", 1:7, sep = '', collapse = " + ")
fml1 <- as.formula(paste("y ~ ", xvar))
fml2 <- as.formula(paste("y ~ ", xvar, "|", xvar))

# FIT A BETA MODEL WITH FIXED PHI
beta1 <- betareg(fml1, data = df2)
summary(beta1)

# Coefficients (mean model with logit link):
#              Estimate Std. Error z value Pr(>|z|)    
# (Intercept) -1.500242   0.329670  -4.551 5.35e-06 ***
# x1           0.007516   0.026020   0.289 0.772680    
# x2           0.429756   0.135899   3.162 0.001565 ** 
# x3           0.099202   0.022285   4.452 8.53e-06 ***
# x4           2.465055   0.415657   5.931 3.02e-09 ***
# x5          -0.003687   0.001070  -3.446 0.000568 ***
# x6           0.007181   0.001821   3.943 8.06e-05 ***
# x7           0.128796   0.186715   0.690 0.490319    
#
# Phi coefficients (precision model with identity link):
#       Estimate Std. Error z value Pr(>|z|)    
# (phi)   3.6868     0.1421   25.95   <2e-16 ***
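
Note that the fixed-phi model reports phi on an identity link, while the variable-phi model below models the precision on a log link. For a like-for-like comparison of the precision intercepts, the fixed-phi model can be refit with a log link through betareg's link.phi argument; a minimal sketch:

# REFIT THE FIXED-PHI MODEL WITH A LOG LINK FOR PHI, MATCHING THE
# LINK USED BY THE VARIABLE-PHI MODEL BELOW
beta1_log <- betareg(fml1, data = df2, link.phi = "log")
summary(beta1_log)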

# FIT A BETA MODEL WITH VARIABLE PHI
beta2 <- betareg(fml2, data = df2)
summary(beta2)

# Coefficients (mean model with logit link):
#              Estimate Std. Error z value Pr(>|z|)    
# (Intercept) -1.996661   0.336445  -5.935 2.95e-09 ***
# x1           0.007033   0.026809   0.262 0.793072    
# x2           0.371098   0.135186   2.745 0.006049 ** 
# x3           0.133356   0.022624   5.894 3.76e-09 ***
# x4           2.951245   0.401493   7.351 1.97e-13 ***
# x5          -0.003475   0.001119  -3.105 0.001902 ** 
# x6           0.006528   0.001884   3.466 0.000529 ***
# x7           0.100274   0.190915   0.525 0.599424    
#
# Phi coefficients (precision model with log link):
#              Estimate Std. Error z value Pr(>|z|)    
# (Intercept) -0.454399   0.452302  -1.005 0.315072    
# x1           0.009119   0.035659   0.256 0.798150    
# x2           0.611049   0.188225   3.246 0.001169 ** 
# x3           0.092073   0.030678   3.001 0.002689 ** 
# x4           2.248399   0.579440   3.880 0.000104 ***
# x5          -0.002188   0.001455  -1.504 0.132704    
# x6          -0.000317   0.002519  -0.126 0.899847    
# x7          -0.166226   0.256199  -0.649 0.516457    

# LIKELIHOOD RATIO TEST TO COMPARE BOTH BETA MODELS
lrtest(beta1, beta2) 

# Likelihood ratio test
#
# Model 1: y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7
# Model 2: y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 | x1 + x2 + x3 + x4 + x5 + x6 + x7
#   #Df LogLik Df Chisq Pr(>Chisq)    
# 1   9 231.55                        
# 2  16 257.24  7 51.38  7.735e-09 ***
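
As a quick check, the chi-square statistic can be reproduced by hand from the two log-likelihoods: 2 * (257.24 - 231.55) = 51.38 on 16 - 9 = 7 degrees of freedom. A minimal sketch:

# MANUAL LIKELIHOOD RATIO TEST FROM THE FITTED LOG-LIKELIHOODS
chisq <- as.numeric(2 * (logLik(beta2) - logLik(beta1)))
pchisq(chisq, df = 7, lower.tail = FALSE)

The highly significant statistic favors the variable-dispersion specification.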

Written by statcompute

October 19, 2014 at 5:25 pm

Posted in Econometrics, S+/R, Statistical Models


Julia: Function vs. Macro

using DataFrames, Benchmark

# FUNCTION TO READ CSV FILE TO DATAFRAME
function csv2df(file)
  d = readcsv(file, header = true)
  head = d[2]
  data = d[1]
  dt = DataFrame()
  for i in 1:size(head)[2]
    dt[symbol(head[i])] = data[:, i]
  end
  return dt
end

df_funct() = csv2df("survdt.csv");

# MACRO TO READ CSV FILE TO DATAFRAME
# NOTE: SINCE THE BODY IS NOT QUOTED, IT EXECUTES ONCE AT MACRO
# EXPANSION TIME AND THE RESULTING DATAFRAME IS SPLICED INTO THE
# EXPANSION AS A CONSTANT, WHICH EXPLAINS THE NEAR-ZERO TIMING BELOW
macro csv2df(file)
  d = readcsv(file, header = true)
  head = d[2]
  data = d[1]
  dt = DataFrame()
  for i in 1:size(head)[2]
    dt[symbol(head[i])] = data[:, i]
  end
  return dt
end

df_macro() = @csv2df "survdt.csv";

# FUNCTION IN THE DATAFRAME PACKAGE
df_frame() = readtable("survdt.csv");

print(compare([df_funct, df_macro, df_frame], 10))

# | Row | Function   | Average  | Relative  | Replications |
# |-----|------------|----------|-----------|--------------|
# | 1   | "df_funct" | 0.164968 | 4.07328e6 | 10           |
# | 2   | "df_macro" | 4.05e-8  | 1.0       | 10           |
# | 3   | "df_frame" | 0.063254 | 1.56183e6 | 10           |

Written by statcompute

October 10, 2014 at 1:18 am

Posted in Julia


A Learning Section for Julia – Recoding A String Vector to An Integer Vector

julia> using RDatasets, DataFrames

julia> data = dataset("datasets", "iris");

julia> species = array(data[:, 5]);

julia> x = int([i == "setosa" for i in species]);

julia> sum(x)
50

Written by statcompute

October 7, 2014 at 11:34 pm

Posted in Julia


Fitting Lasso with Julia

Julia Code

using RDatasets, DataFrames, GLMNet

data = dataset("MASS", "Boston");
y = array(data[:, 14]);
x = array(data[:, 1:13]);

# CROSS-VALIDATE THE LASSO PATH AND PULL THE COEFFICIENTS AT THE
# LAMBDA MINIMIZING THE MEAN CROSS-VALIDATED LOSS
cv = glmnetcv(x, y);
result = DataFrame();
result[:Vars] = names(data)[1:13];
result[:Beta] = cv.path.betas[:, indmin(cv.meanloss)];
result

# | Row | Vars    | Beta       |
# |-----|---------|------------|
# | 1   | Crim    | -0.0983463 |
# | 2   | Zn      | 0.0414416  |
# | 3   | Indus   | 0.0        |
# | 4   | Chas    | 2.68519    |
# | 5   | NOx     | -16.3066   |
# | 6   | Rm      | 3.86694    |
# | 7   | Age     | 0.0        |
# | 8   | Dis     | -1.39602   |
# | 9   | Rad     | 0.252687   |
# | 10  | Tax     | -0.0098268 |
# | 11  | PTRatio | -0.929989  |
# | 12  | Black   | 0.00902588 |
# | 13  | LStat   | -0.5225    |

R Code

library(glmnet)
data(Boston, package = "MASS")

x <- as.matrix(Boston[, 1:13])
y <- as.matrix(Boston[, 14])

# 10-FOLD CROSS-VALIDATION TO SELECT THE OPTIMAL LAMBDA
cv <- cv.glmnet(x, y, nfolds = 10)
# REFIT AT THE LAMBDA MINIMIZING CROSS-VALIDATED ERROR
mdl <- glmnet(x, y, lambda = cv$lambda.min)
mdl$beta

# crim     -0.098693203
# zn        0.041588291
# indus     .          
# chas      2.681633344
# nox     -16.354590598
# rm        3.860035926
# age       .          
# dis      -1.399697121
# rad       0.255484621
# tax      -0.009935509
# ptratio  -0.931031828
# black     0.009031422
# lstat    -0.522741592
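
Both implementations pick the lambda minimizing cross-validated loss. If a sparser model is preferred, glmnet also reports the largest lambda within one standard error of the minimum; a minimal sketch:

# COEFFICIENTS AT THE MORE CONSERVATIVE LAMBDA.1SE
coef(cv, s = "lambda.1se")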

Written by statcompute

October 7, 2014 at 11:09 pm

By-Group Aggregation in Parallel

Similar to the row search, by-group aggregation is another perfect use case for demonstrating the power of split-and-conquer with parallelism.

The example below shows that a homebrew by-group aggregation with the foreach package, albeit inefficiently coded, is still considerably faster than the summarize() function in the Hmisc package.

load('2008.Rdata')

pkgs <- c('rbenchmark', 'doParallel', 'foreach', 'Hmisc')
lapply(pkgs, require, character.only = T)
registerDoParallel(cores = 8)

benchmark(replications = 10,
  summarize = {
    summarize(data[c("Distance", "Month")], data["Month"], colMeans, stat.name = NULL)
  },
  foreach = {
    data2 <- split(data, data$Month)
    test2 <- foreach(i = data2, .combine = rbind) %dopar% (data.frame(Month = unique(i$Month), Distance= mean(i$Distance)))
  }
)

#        test replications elapsed relative user.self sys.self user.child
# 2   foreach           10  19.644     1.00    17.411    1.965      1.528
# 1 summarize           10  30.244     1.54    29.822    0.441      0.000
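
As a quick sanity check on the output (not the timing), the same by-group means can be computed with base R's aggregate(); a minimal sketch:

# BASE R EQUIVALENT OF THE TWO AGGREGATIONS ABOVE
test3 <- aggregate(Distance ~ Month, data = data, FUN = mean)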

Written by statcompute

October 4, 2014 at 11:41 pm

Posted in Big Data, S+/R


Vector Search vs. Binary Search

# REFERENCE:
# user2014.stat.ucla.edu/files/tutorial_Matt.pdf

pkgs <- c('data.table', 'rbenchmark')
lapply(pkgs, require, character.only = T)
 
load('2008.Rdata')
dt <- data.table(data)

benchmark(replications = 10, order = "elapsed",
  vector_search = {
    # FULL SCAN: EVALUATES THE LOGICAL CONDITION ON EVERY ROW
    test1 <- dt[ArrTime == 1500 & Origin == 'ABE', ]
  },
  binary_search = {
    # SETKEY SORTS THE TABLE BY THE KEY COLUMNS; KEYED SUBSETS THEN
    # LOCATE MATCHING ROWS BY BINARY SEARCH
    setkey(dt, ArrTime, Origin)
    test2 <- dt[.(1500, 'ABE'), ]
  }
)

#            test replications elapsed relative user.self sys.self user.child
# 2 binary_search           10   0.335    1.000     0.311    0.023          0
# 1 vector_search           10   7.245   21.627     7.102    0.131          0
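
The speedup comes from setkey() physically sorting the table once, after which the keyed lookup is logarithmic rather than linear in the number of rows. Both queries should select the same flights, although row order can differ once the table is keyed; a minimal check:

# BOTH APPROACHES SHOULD RETURN THE SAME NUMBER OF MATCHING ROWS
nrow(test1) == nrow(test2)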

Written by statcompute

October 1, 2014 at 10:27 pm

Posted in Big Data, S+/R


Row Search in Parallel

I’ve always wondered whether the efficiency of a row search can be improved by splitting the whole data.frame into chunks and then searching each chunk in parallel.

The R code below compares the standard row search against a parallel row search with the foreach package. The result is very encouraging: over 10 replications, the elapsed time of the parallel search is only a fraction of the elapsed time of the standard search.

load('2008.Rdata')
# SPLIT THE DATA.FRAME ROUND-ROBIN INTO 8 CHUNKS, ONE PER CORE
data2 <- split(data, 1:8)

library(rbenchmark)
library(doParallel)
library(foreach)
registerDoParallel(cores = 8)

benchmark(replications = 10, order = "elapsed",
  non_parallel = {
    test1 <- data[which(data$ArrTime == 1500 & data$Origin == 'ABE'), ]
  },
  parallel = {
    test2 <- foreach(i = data2, .combine = rbind) %dopar% i[which(i$ArrTime == 1500 & i$Origin == 'ABE'), ]
  }
)
#           test replications elapsed relative user.self sys.self user.child
# 2     parallel           10   2.680    1.000     0.319    0.762     12.078
# 1 non_parallel           10   7.474    2.789     7.339    0.139      0.000
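
For reference, the same split-and-search pattern can also be written with mclapply() from the parallel package; a sketch under the same 8-core assumption, not benchmarked here:

# FORK-BASED ALTERNATIVE USING THE PARALLEL PACKAGE
library(parallel)
test3 <- do.call(rbind, mclapply(data2, function(i)
  i[which(i$ArrTime == 1500 & i$Origin == 'ABE'), ], mc.cores = 8))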

Written by statcompute

September 28, 2014 at 5:41 pm

Posted in Big Data, S+/R

