
A Quick Example (NOT Hello World!)

This is a standard example that comes with Rmpi, used here to approximate $\pi \approx 3.1415926$.


SPMD Basic Steps:

1. Initialize the communicator. (init)
2. Compute a local partial sum on each rank.
3. Communicate. (reduce)
4. Output from rank 0. (comm.print)
5. Finalize. (finalize)

Parallel (SPMD) code:

See ex_pi_spmd.r for ultra-large/unlimited $N$.
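
The script applies the midpoint rule to $\int_0^1 \frac{4}{1+x^2}\,dx = \pi$: with $n$ subintervals of width $1/n$, each rank sums every totalcpu-th term starting from its own id, and a reduction adds the partial sums:

$$\pi = \int_0^1 \frac{4}{1+x^2}\,dx \approx \frac{1}{n} \sum_{i=1}^{n} \frac{4}{1 + \left( \frac{i - 0.5}{n} \right)^2}.$$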

# File name: ex_pi_spmd.r
# Run: mpiexec -np 2 Rscript --vanilla ex_pi_spmd.r

### Load pbdMPI and initialize the communicator.
library(pbdMPI, quiet = TRUE)
init()

### Compute pi.
n <- 1000
totalcpu <- .comm.size
id <- .comm.rank + 1
mypi <- 4 * sum(1 / (1 + ((seq(id, n, totalcpu) - 0.5) / n)^2)) / n    # The formula from Rmpi's example.
mypi <- reduce(mypi, op = "sum")

### Output from rank 0, since reduce(...) returns the result only to rank 0 by default.
comm.print(mypi)
finalize()
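
If every rank needs the result rather than only rank 0, pbdMPI's allreduce() returns the reduced value on all ranks. A minimal sketch of the same computation (the file name ex_pi_allreduce.r is hypothetical, not part of the original example):

# File name: ex_pi_allreduce.r (hypothetical variant)
# Run: mpiexec -np 2 Rscript --vanilla ex_pi_allreduce.r
library(pbdMPI, quiet = TRUE)
init()

n <- 1000
id <- .comm.rank + 1
mypi <- 4 * sum(1 / (1 + ((seq(id, n, .comm.size) - 0.5) / n)^2)) / n
mypi <- allreduce(mypi, op = "sum")   # every rank receives the total
comm.print(mypi, all.rank = TRUE)     # print from all ranks, not only rank 0
finalize()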

Run SPMD code from the command line: (Batch Job)

Note that the command to launch MPI programs may be mpiexec, mpirun, orterun, or mpiexec.exe, depending on the operating system and the MPI implementation. Also, see Rscript for its usage.

SHELL> mpiexec -np 2 Rscript --vanilla ex_pi_spmd.r
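
With 2 ranks and $n = 1000$, the output should look roughly like this (startup messages vary by MPI implementation):

COMM.RANK = 0
[1] 3.141593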

Run SPMD code in an interactive mode: (Manager/Worker)

For OpenMPI, you need a .Rprofile file in the working directory or in your home directory. This file can be copied from the Rmpi installation directory; by default, it is located at $R_HOME/library/Rmpi/Rprofile. Then you can start an interactive session by launching R (NOT Rscript) through mpirun from the directory where .Rprofile is located.
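
One way to copy it into the current directory (a sketch; system.file() resolves the installed path, which varies by installation):

SHELL> cp $(Rscript -e 'cat(system.file("Rprofile", package = "Rmpi"))') ./.Rprofile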

SHELL> mpirun -np 2 R --no-save -q
###
### Some messages will show that the workers are running.
### The "spawn" step is no longer needed for OpenMPI.
###
R> # library(Rmpi)
R> # mpi.spawn.Rslaves()                    # Required for LAM/MPI.

R> mpi.bcast.cmd(source("ex_pi_spmd.r"))    # Workers run first and wait for the manager.
R> source("ex_pi_spmd.r")                   # The manager runs and collects the results.

### Comment out finalize() in "ex_pi_spmd.r" to avoid terminating mpirun and R.

Maintained: Wei-Chen Chen
E-Mail: wccsnow at gmail dot com
Last Updated: December 30, 2016
Created: October 19, 2011