Warning: this practical session is under construction. Please do not attempt to follow it yet.

Running MPI on Grid'5000

When attempting to run MPI on Grid'5000 you'll be faced with a number of challenges, ranging from classical setup problems for MPI software to problems specific to Grid'5000. This practical session aims at driving you through the most common use cases, which are:

  • setting up and starting mpich-2 on a default environment using allow_classic_ssh
  • setting up and starting mpich-2 on a default environment using oarsh
  • setting up and starting mpich-2 on a kadeploy image
  • setting up and starting mpich-2 to use high performance interconnect
  • setting up Open MPI to accept private networks as routable between hosts

Pre-requisite

  • Download the mpich2-1.0.8 sources and compile them:
./configure --enable-echo --prefix=$HOME/mpich2-1.0.8 --enable-sharedlibs=none
make
  • Install it in your home directory:
make install
  • Setup the libraries

Add the environment variables needed to use your mpich2 version:

$ vi ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/mpich2-1.0.8/lib
export PATH=$PATH:$HOME/mpich2-1.0.8/bin:$HOME/mpich2-1.0.8/sbin
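Shells that are already open will not pick up these variables by themselves; assuming bash, a quick way to apply them to the current shell is:
$ source ~/.bashrc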
  • Setup mpd (you may have a more complex configuration)
echo MPD_SECRETWORD=password >>$HOME/.mpd.conf
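mpd refuses to start if this file is readable by other users, so it is worth restricting its permissions right away (a plain chmod, nothing Grid'5000 specific):
$ chmod 600 $HOME/.mpd.conf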

Setting up and starting mpich-2 on a default environment using allow_classic_ssh

  • Reserve three nodes with an interactive job: oarsub -I -t allow_classic_ssh -l nodes=3
  • Code to test:
$ mkdir -p $HOME/src/mpi
$ vi $HOME/src/mpi/tp.c
#include <stdio.h>
#include <unistd.h> /* for gethostname */
#include <mpi.h>
#include <time.h> /* for the work function only */

int main (int argc, char *argv []) {
       char hostname[257];
       int size, rank;
       int bcast_value = 1;

       gethostname (hostname, sizeof hostname);
       MPI_Init (&argc, &argv);
       MPI_Comm_rank (MPI_COMM_WORLD, &rank);
       MPI_Comm_size (MPI_COMM_WORLD, &size);
       /* rank 0 sets the value that is broadcast to every process */
       if (!rank) {
            bcast_value = 42;
       }
       MPI_Bcast (&bcast_value, 1, MPI_INT, 0, MPI_COMM_WORLD);
       printf("%s\t- %d - %d - %d\n", hostname, rank, size, bcast_value);
       fflush(stdout);

       MPI_Barrier (MPI_COMM_WORLD);
       MPI_Finalize ();
       return 0;
}


  • Compile your code
$ $HOME/mpich2-1.0.8/bin/mpicc src/mpi/tp.c -o src/mpi/tp
  • Use this script to launch it:
# number of distinct nodes in the reservation
NODES=`uniq < $OAR_NODEFILE | wc -l | tr -d ' '`
echo "nodes = $NODES"
# total number of processes ($OAR_NODEFILE has one line per reserved core)
NPROCS=`wc -l < $OAR_NODEFILE | tr -d ' '`
echo "nprocs= $NPROCS"
# start one mpd daemon per node, using ssh as the connector
$HOME/mpich2-1.0.8/bin/mpdboot --rsh=ssh --totalnum=$NODES --file=$OAR_NODEFILE
sleep 1
$HOME/mpich2-1.0.8/bin/mpirun -n $NPROCS $HOME/src/mpi/tp
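Once the run completes, the mpd ring started by mpdboot can be shut down from the same node; mpdallexit, shipped with mpich2, is the usual way to do so:
$HOME/mpich2-1.0.8/bin/mpdallexit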

Setting up and starting mpich-2 on a default environment using oarsh

oarsh is the default connector used when you reserve a node. To be able to use this connector, you need to specify it on the mpdboot command line. You can therefore use exactly the same script as in the previous allow_classic_ssh part, simply replacing --rsh=ssh with --rsh=oarsh.
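For instance, the mpdboot line of the previous script becomes:
$HOME/mpich2-1.0.8/bin/mpdboot --rsh=oarsh --totalnum=$NODES --file=$OAR_NODEFILE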

Setting up and starting mpich-2 on a kadeploy image

Now, you can try to create a kadeploy environment with mpich2 installed on it and then deploy a set of three machines to launch the MPI code from the previous section.
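As a rough sketch only, and assuming the kadeploy3 tools with their -e (environment name) and -f (node file) options, a deployment could look like the following; the environment name my_mpich2_env is purely illustrative and stands for the image you registered:
# reserve three nodes of the deploy type
oarsub -I -t deploy -l nodes=3
# deploy the custom environment on the reserved nodes (my_mpich2_env is a placeholder)
kadeploy3 -e my_mpich2_env -f $OAR_NODEFILE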