Programming with Big Data in R

Programming with Big Data in R (pbdR) is a series of R packages and an environment for statistical computing with big data using high-performance statistical computation. pbdR uses the same programming language as R, with S3/S4 classes and methods, and is used among statisticians and data miners for developing statistical software. The significant difference between pbdR and R code is that pbdR mainly focuses on distributed memory systems, where data are distributed across several processors and analyzed in batch mode, while communication between processors is based on MPI, which is easily used in large high-performance computing (HPC) systems. The R system, by contrast, mainly focuses on single multi-core machines for data analysis via an interactive mode such as a GUI. The two main MPI implementations in R are Rmpi and pbdR's pbdMPI.

* pbdR, built on pbdMPI, uses SPMD (single program, multiple data) parallelism, where every processor is considered a worker and owns part of the data. SPMD parallelism, introduced in the mid-1980s, is particularly efficient in homogeneous computing environments for large data, for example, performing singular value decomposition on a large matrix, or performing clustering analysis on high-dimensional large data. On the other hand, there is no restriction against using manager/worker parallelism within an SPMD environment.
* Rmpi uses manager/worker parallelism, where one main processor (the manager) controls all other processors (the workers). Manager/worker parallelism, introduced around the early 2000s, is particularly efficient for large tasks in small clusters, for example, bootstrap methods and Monte Carlo simulation in applied statistics, since the i.i.d. (independent and identically distributed) assumption is commonly used in most statistical analyses. In particular, task-pull parallelism gives Rmpi better performance in heterogeneous computing environments.

The idea of SPMD parallelism is to let every processor do the same amount of work, but on different parts of a large data set. For example, a modern GPU is a large collection of slower co-processors that simply apply the same computation to different parts of relatively smaller data, yet SPMD parallelism yields an efficient way to obtain final solutions (i.e., the time to solution is shorter).
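The SPMD idea above can be sketched in plain R with no MPI installation: simulating four ranks, each rank derives the slice of the index range it owns from its rank number alone, computes a partial sum, and the partial sums combine into the global result — the role allreduce plays across real processes. The rank count and chunking scheme here are illustrative assumptions, not part of pbdR itself.

```r
# SPMD sketch in plain R: simulate 4 ranks, each working on its own
# slice of 1:100. In real pbdMPI every rank runs the same script and
# allreduce() combines the partial sums across processes.
n.ranks <- 4
N <- 100
partial <- sapply(seq_len(n.ranks) - 1, function(rank) {
  # each "rank" computes the chunk it owns from its rank number alone
  chunk <- ((rank * N / n.ranks) + 1):((rank + 1) * N / n.ranks)
  sum(chunk)
})
total <- sum(partial)  # stands in for allreduce(partial, op = "sum")
stopifnot(total == sum(1:N))
```

Because every simulated rank executes identical code and only its rank number differs, the same script scales to any number of processes without a central coordinator — the defining property of SPMD.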


Package design

Programming with pbdR requires various packages developed by the pbdR core team. Among these packages, pbdMPI provides wrapper functions to the MPI library, and it also produces a shared library and a configuration file for MPI environments. All other packages rely on this configuration for installation and library loading, which avoids the difficulty of library linking and compiling; they can then use MPI functions directly. The packages are:

* pbdMPI --- an efficient interface to MPI, either OpenMPI or MPICH2, with a focus on the Single Program/Multiple Data (SPMD) parallel programming style
* pbdSLAP --- bundles scalable dense linear algebra libraries in double precision for R, based on ScaLAPACK version 2.0.2, which includes several scalable linear algebra packages (namely BLACS, PBLAS, and ScaLAPACK)
* pbdNCDF4 --- interface to parallel Unidata NetCDF4 format data files
* pbdBASE --- low-level ScaLAPACK codes and wrappers
* pbdDMAT --- distributed matrix classes and computational methods, with a focus on linear algebra and statistics
* pbdDEMO --- set of package demonstrations and examples, and a unifying vignette
* pmclust --- parallel model-based clustering using pbdR
* pbdPROF --- profiling package for MPI codes and visualization of parsed statistics
* pbdZMQ --- interface to ØMQ (ZeroMQ)
* remoter --- R client with remote R servers
* pbdCS --- pbdR client with remote pbdR servers
* pbdRPC --- remote procedure call
* kazaam --- very tall and skinny distributed matrices
* pbdML --- machine learning toolbox

Among these packages, pbdDEMO is a collection of 20+ package demos which offer example uses of the various pbdR packages, and contains a vignette that offers detailed explanations for the demos and provides some mathematical and statistical insight.


Examples


Example 1

Hello World! Save the following code in a file called "demo.r":

    ### Initial MPI
    library(pbdMPI, quiet = TRUE)
    init()

    comm.cat("Hello World!\n")

    ### Finish
    finalize()

and use the command

    mpiexec -np 2 Rscript demo.r

to execute the code, where Rscript is a command-line executable program.


Example 2

The following example, modified from pbdMPI, illustrates the basic syntax of the language of pbdR. Since pbdR is designed in SPMD, all the R scripts are stored in files and executed from the command line via mpiexec, mpirun, etc. Save the following code in a file called "demo.r":

    ### Initial MPI
    library(pbdMPI, quiet = TRUE)
    init()
    .comm.size <- comm.size()
    .comm.rank <- comm.rank()

    ### Set a vector x on all processors, with different values
    N <- 5
    x <- (1:N) + N * .comm.rank

    ### All reduce x using the summation operation
    y <- allreduce(as.integer(x), op = "sum")
    comm.print(y)
    y <- allreduce(as.double(x), op = "sum")
    comm.print(y)

    ### Finish
    finalize()

and use the command

    mpiexec -np 4 Rscript demo.r

to execute the code, where Rscript is a command-line executable program.
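The result of the allreduce above can be checked serially in plain R: with 4 ranks, rank r holds x = (1:5) + 5*r, and allreduce with op = "sum" adds these vectors elementwise across ranks, so every rank ends up printing the same combined vector. This is a sketch of the arithmetic, not a run of pbdMPI itself.

```r
# Serial check of the 4-rank allreduce: rank r owns x = (1:5) + 5*r,
# and the elementwise sum across ranks is what every rank prints.
N <- 5
ranks <- 0:3
y <- Reduce(`+`, lapply(ranks, function(r) (1:N) + N * r))
print(y)  # 34 38 42 46 50
stopifnot(all(y == c(34, 38, 42, 46, 50)))
```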


Example 3

The following example, modified from pbdDEMO, illustrates the basic ddmatrix computation in pbdR, performing singular value decomposition on a given matrix. Save the following code in a file called "demo.r":

    # Initialize process grid
    library(pbdDMAT, quiet = TRUE)
    if (comm.size() != 2)
      comm.stop("Exactly 2 processors are required for this demo.")
    init.grid()

    # Setup for the remainder
    comm.set.seed(diff = TRUE)
    M <- N <- 16
    BL <- 2 # blocking --- passing a single value BL assumes BLxBL blocking
    dA <- ddmatrix("rnorm", nrow = M, ncol = N, mean = 100, sd = 10)

    # LA SVD
    svd1 <- La.svd(dA)
    comm.print(svd1$d)

    # Finish
    finalize()

and use the command

    mpiexec -np 2 Rscript demo.r

to execute the code, where Rscript is a command-line executable program.
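For comparison, the same decomposition on an ordinary (non-distributed) matrix uses base R's La.svd directly; pbdDMAT mirrors this interface for ddmatrix objects. The seed and dimensions below are illustrative, chosen only to match the shape of the demo above.

```r
# Serial analogue of the ddmatrix demo: La.svd() on an ordinary
# 16x16 matrix of N(100, 10^2) draws. svd1$d holds the singular
# values in decreasing order, the quantity comm.print(svd1$d)
# shows in the distributed version.
set.seed(1234)
A <- matrix(rnorm(16 * 16, mean = 100, sd = 10), nrow = 16)
svd1 <- La.svd(A)
stopifnot(length(svd1$d) == 16)
stopifnot(all(diff(svd1$d) <= 0))  # decreasing order
```

The pbdDMAT version distributes A across the 2x1 process grid in BLxBL blocks and computes the same singular values with ScaLAPACK routines under the hood.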



