#### A Quick Example (NOT Hello World!)

This is a standard example shipped with <code>Rmpi</code> that approximates $\pi \approx 3.1415926$. It applies the midpoint rule to $\int_0^1 \frac{4}{1 + x^2}\, dx = \pi$ with $n$ subintervals, and each process sums only its share of the terms.

---

#### SPMD Basic Steps:

- Steps:
  0. Initialize. (<code>init</code>)
  1. Read data in parts, one portion per worker.
  2. Compute. (<code>send</code>, <code>recv</code>, <code>barrier</code>, ...)
  3. Collect results among workers. (<code>gather</code>, <code>allgather</code>, <code>reduce</code>, <code>allreduce</code>, ...; see the sketch at the end of this page.)
  4. Finalize. (<code>finalize</code>)

---

#### Parallel (SPMD) code:

See <a href="./cookbook/ex_pi_spmd.r" target="_blank">ex_pi_spmd.r</a> for a version that handles ultra-large/unlimited $N$.

```
# File name: ex_pi_spmd.r
# Run: mpiexec -np 2 Rscript --vanilla ex_pi_spmd.r

### Load pbdMPI and initialize the communicator.
library(pbdMPI, quiet = TRUE)
init()

### Compute pi; each process sums every .comm.size-th midpoint term.
n <- 1000
totalcpu <- .comm.size
id <- .comm.rank + 1
mypi <- 4*sum(1/(1+((seq(id,n,totalcpu)-.5)/n)^2))/n    # The example from Rmpi.
mypi <- reduce(mypi, op = "sum")

### Output from RANK 0 since reduce(...) returns the result only to rank 0 by default.
comm.print(mypi)
finalize()
```

---

#### Run SPMD code in command mode: (Batch Job)

Note that the command to invoke MPI may be <code>mpiexec</code>, <code>mpirun</code>, <code>orterun</code>, or <code>mpiexec.exe</code>, depending on the operating system and the MPI implementation. Also, see <a href="./rscript.html">Rscript</a> for its usage.

```
SHELL> mpiexec -np 2 Rscript --vanilla ex_pi_spmd.r
```

---

#### Run SPMD code in interactive mode: (Master/Worker)

For OpenMPI, you need a file <font color="red"><b><code>.Rprofile</code></b></font> in the working directory or in your home directory. This file can be copied from the installation directory of <code>Rmpi</code>; by default, it is located at <font color="red"><b><code>$R_HOME/library/Rmpi/Rprofile</code></b></font>. Then, you can start an interactive session by launching <code>mpirun</code> with <code>R</code> (NOT <code>Rscript</code>) from the directory where <code>.Rprofile</code> is located.

```
SHELL> mpirun -np 2 R --no-save -q

###
### Some messages will show that the workers are running.
### The "spawn" step is no longer needed for OpenMPI.
###

R> # library(Rmpi)
R> # mpi.spawn.Rslaves()                  # Required for LAM/MPI.
R> mpi.bcast.cmd(source("ex_pi_spmd.r"))  # Workers go first and wait for the manager.
R> source("ex_pi_spmd.r")                 # Manager runs and collects the results.

### Comment out finalize() in "ex_pi_spmd.r" to avoid terminating mpirun and R.
```

---

<div w3-include-html="./preamble_tail_date.html"></div>
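---

#### Other collectives: (Sketch)

The pi example above uses only <code>reduce</code>. The other collectives listed in Step 3 (<code>gather</code>, <code>allgather</code>, <code>allreduce</code>) follow the same SPMD pattern. Below is a minimal sketch, assuming the same two-process launch as above; the file name <code>ex_collectives_spmd.r</code> is hypothetical and only for illustration.

```
# File name: ex_collectives_spmd.r   (hypothetical example, not part of the Rmpi/pbdMPI demos)
# Run: mpiexec -np 2 Rscript --vanilla ex_collectives_spmd.r

### Load pbdMPI and initialize the communicator.
library(pbdMPI, quiet = TRUE)
init()

### Each rank holds a different local value.
x <- comm.rank() + 1

### allreduce(): every rank receives the global sum.
sum.all <- allreduce(x, op = "sum")

### allgather(): every rank receives a list with one element per rank.
parts <- allgather(x)

### comm.print() prints from rank 0 by default.
comm.print(sum.all)
comm.print(unlist(parts))

finalize()
```

In contrast to <code>allreduce</code> and <code>allgather</code>, which return the result on every rank, <code>reduce</code> and <code>gather</code> return it only on rank 0 by default.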