Running mpi4py jobs on Pi

Install the Spack package manager

$ cd
$ git clone https://github.com/sjtuhpcc/spack.git
$ cd spack
$ ./bootstrap.sh user --install

Add the following settings to ~/.bashrc or ~/.bash_profile.

# Spack package management
if [ -d "$HOME/spack" ]; then
    export SPACK_ROOT=$HOME/spack
    source $SPACK_ROOT/share/spack/setup-env.sh
fi  
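
Open a new shell (or source the file) and check that Spack is available:

$ source ~/.bashrc
$ spack --version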

Build mpi4py

$ spack install py-mpi4py ^openmpi@2.0.4+pmi schedulers=slurm %gcc@5.4.0
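
In this spec, ^openmpi@2.0.4+pmi schedulers=slurm pins the MPI dependency to an OpenMPI build with PMI and Slurm support, and %gcc@5.4.0 selects the compiler. When the build finishes, the installation can be verified with:

$ spack find py-mpi4py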

Submit an mpi4py job

mpi4py_test.py

#!/usr/bin/env python
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Program: mpi4py_test.py
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
from mpi4py import MPI

nproc = MPI.COMM_WORLD.Get_size()   # Size of the communicator
iproc = MPI.COMM_WORLD.Get_rank()   # Rank of this process in the communicator
inode = MPI.Get_processor_name()    # Node where this MPI process runs

if iproc == 0:
    print("This code is a test for mpi4py.")

for i in range(nproc):
    MPI.COMM_WORLD.Barrier()
    if iproc == i:
        print("Rank %d out of %d" % (iproc, nproc))

MPI.Finalize()
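
For a slightly more substantial check, a collective operation can be used as well. Below is a minimal sketch (a hypothetical file, mpi4py_reduce.py, not part of the original recipe) that sums the rank numbers across all processes:

#!/usr/bin/env python
# mpi4py_reduce.py: hypothetical follow-up test using a collective operation
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank contributes its rank number; the sum arrives on rank 0
total = comm.reduce(rank, op=MPI.SUM, root=0)

if rank == 0:
    # With size ranks the expected value is size*(size-1)/2
    print("Sum of ranks 0..%d = %d" % (size - 1, total))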

mpi4py.slurm

#!/bin/bash
#SBATCH -J mpi4py_test
#SBATCH -o mpi4py_test.out
#SBATCH -e mpi4py_test.err
# 64 MPI tasks in total, 16 per node (4 nodes), on the cpu128 partition
#SBATCH -p cpu128
#SBATCH -n 64
#SBATCH --ntasks-per-node=16

# Environment Modules
source /usr/share/Modules/init/bash

# Spack
if [ -d "$HOME/spack" ]; then
        export SPACK_ROOT=$HOME/spack
        source $SPACK_ROOT/share/spack/setup-env.sh
fi

# Load the Spack-built compiler, MPI, and mpi4py modules
source <(spack module tcl loads gcc@5.4.0)
source <(spack module tcl loads openmpi %gcc@5.4.0)
source <(spack module tcl loads --dependencies py-mpi4py %gcc@5.4.0)

srun -n $SLURM_NTASKS --mpi=pmi2 python mpi4py_test.py
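
The --mpi=pmi2 flag tells srun to wire up the MPI ranks through Slurm's PMI2 interface, which is why OpenMPI was built with the +pmi and schedulers=slurm variants above.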

Submit and check results

$ sbatch mpi4py.slurm
$ cat mpi4py_test.out
Rank 13 out of 64
Rank 18 out of 64
...
Rank 6 out of 64
Rank 14 out of 64
This code is a test for mpi4py.
Rank 0 out of 64
Rank 8 out of 64
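
Even with the barrier in the loop, lines can appear out of order in the output file: each rank buffers its own stdout and Slurm merges the streams, so the interleaving is not deterministic.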
