Row Search in Parallel

I’ve always wondered whether the efficiency of row search can be improved if the whole data.frame is split into chunks and the row search is then conducted within each chunk in parallel.

In the R code below, a comparison is done between the standard row search and a parallel row search with the foreach package. The result is very encouraging: for 10 replications, the elapsed time of the parallel search is only a fraction of the elapsed time of the standard search.

load('2008.Rdata')
# split the data.frame into 8 chunks by recycling the vector 1:8,
# assigning rows to chunks in a round-robin fashion
data2 <- split(data, 1:8)

library(rbenchmark)
library(doParallel)
registerDoParallel(cores = 8)
library(foreach)

benchmark(replications = 10, order = "elapsed",
  non_parallel = {
    # standard row search over the full data.frame
    test1 <- data[which(data$ArrTime == 1500 & data$Origin == 'ABE'), ]
  },
  parallel = {
    # search each chunk on a separate worker and rbind the matching rows
    test2 <- foreach(i = data2, .combine = rbind) %dopar% i[which(i$ArrTime == 1500 & i$Origin == 'ABE'), ]
  }
)
#           test replications elapsed relative user.self sys.self user.child
# 2     parallel           10   2.680    1.000     0.319    0.762     12.078
# 1 non_parallel           10   7.474    2.789     7.339    0.139      0.000
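
It is worth verifying that the parallel search returns exactly the same rows as the standard search. Since the chunks are recombined with rbind, the rows may come back in a different order, so a sketch of a simple set comparison on the row names (assuming the original row names survive the split / rbind round trip) could look like this:

# both searches should find the same number of rows
nrow(test1) == nrow(test2)
# and the same set of rows, regardless of the order in which
# the eight chunks were recombined
setequal(rownames(test1), rownames(test2))

Note that the data.frame is split into as many chunks as there are registered cores, so each worker handles exactly one chunk.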

Written by statcompute

September 28, 2014 at 5:41 pm

Posted in Big Data, S+/R

