Changes between Version 18 and Version 19 of MpiParallel


Timestamp: 2016-12-26T12:50:32Z
Author: Gary J. Ferland
Comment: clean up master MpiParallel page

For C17 and later, see [wiki:MpiParallelC17].

== Running a number of different models with a makefile ==

Christophe Morisset wrote a makefile that can be used to run a number of models in parallel.
Follow these steps:

Create each simulation as a separate input script, with names that can be identified by a wildcard search.
I would use names ending in "{{{.in}}}" so that {{{ls *.in}}} finds all the input scripts you want to run.
Examples might be "{{{dog.in, cat.in, mouse.in, horse.in}}}".
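
For example, after creating the scripts you can confirm that the wildcard picks them all up (the file names are just the examples above):

{{{
$ ls *.in
cat.in  dog.in  horse.in  mouse.in
}}}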

Download Christophe's makefile from
[http://data.nublado.org/etc/Makefile here].
Edit the Makefile to set the correct path to the Cloudy executable.
To run the sims using N cores do

{{{
make -j N
}}}
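
Putting these steps together, a typical session might look something like this (the download tool, the editor, and the choice of 4 cores are assumptions; any equivalent tools will do):

{{{
cd /path/to/my/sims        # directory holding dog.in, cat.in, ...
wget http://data.nublado.org/etc/Makefile
nano Makefile              # set the path to the Cloudy executable
make -j 4                  # run all the *.in scripts on 4 cores
}}}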

The makefile includes an option to run only a subset of the models, by specifying part of the filenames.
For instance, you could run the "{{{model1*.in}}}" set by specifying

{{{
make -j N name='model1'
}}}

-------

== Using GNU parallel ==

Jane Rigby describes how to use GNU parallel to run many models in parallel in [http://tech.groups.yahoo.com/group/cloudy_simulations/message/1942 this] post on the
[http://tech.groups.yahoo.com/group/cloudy_simulations/ Yahoo group].
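
The post gives the details; as a rough sketch of the same approach (assuming GNU {{{parallel}}} is installed and the executable path is adjusted to your build), something like this runs every "{{{.in}}}" script on 8 cores:

{{{
ls *.in | sed 's/\.in$//' | parallel -j 8 /path/to/cloudy.exe -r {}
}}}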

------

= The optimize and grid commands on MPI clusters =

The makefile method described above will run a number of different models and create a number
of output files.
This second method describes how to build Cloudy with MPI support and use it to run grids or optimizations.
It is limited to the {{{grid}}} and {{{optimize}}} commands.
The resulting output will have the series of models concatenated into single files.

== Building on an MPI system ==

First you need to make sure that you have MPI installed on your computer. You will need MPI version 2 or newer to run Cloudy. On Linux machines you will typically have packages for MPICH2, LAM/MPI, and/or Open MPI (the latter is a further development of LAM/MPI, which is now in maintenance-only mode). All of these support MPI-2.
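
A quick way to check whether an MPI installation is already visible in your environment is to ask the launcher and compiler wrapper for their versions (the exact output depends on the distribution):

{{{
mpirun --version
mpicxx --version
}}}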

The next step is to make sure that your account is aware of the MPI installation. On smaller systems this may involve the mpi-selector command, but this will depend on how your computer manager set up the system. The command '''mpi-selector --list''' will list the available choices, and you can select the MPI version of your choice with '''mpi-selector --set <name>'''. On HPC machines and clusters you may need to issue a '''module load''' command to make MPI visible. Several versions of MPI may be available. The command '''module avail''' should give a full list of all the available modules. The command '''module list''' will give a list of all the modules that are already loaded. When in doubt, contact your system administrator or helpdesk.
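
On a cluster that uses environment modules, the sequence might look like this (the module name is an assumption; use whatever '''module avail''' reports on your system):

{{{
module avail           # list all available modules
module load openmpi    # make an MPI implementation visible
module list            # confirm that it is loaded
}}}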

Next build Cloudy in one of the MPI directories. These are under the source directory and support GNU gcc (sys_mpi_gcc) and Intel icc (sys_mpi_icc). Your system manager will tell you whether to use the GNU or Intel compiler. Please also check the main compilation page for the supported versions of g++ and icc. Most MPI distributions (but not all!) will provide convenient wrapper scripts for the compiler, typically called mpiCC or mpicxx. The makefile will try to find the wrapper script, or make a best-effort compilation when that fails. If the compilation fails, please contact your sysadmin or helpdesk for further advice.
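
The build itself might then look something like this (the path and the number of parallel make jobs are assumptions):

{{{
cd /path/to/cloudy/source/sys_mpi_gcc
make -j 4
}}}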

== Running the code ==

On most systems the code should be executed with something like

{{{
mpirun -np 8 /path/to/cloudy/source/sys_mpi_gcc/cloudy.exe -r name
}}}

(the command may also be called mpiexec, orterun, etc.). The -np option specifies the number of ranks (cores), which is 8 in this example. For advice on how to choose the number of cores, consult your system manager; the right choice depends on the number of cores per node, the amount of memory per core, etc.

Note that using the -r option (or -p option) is mandatory: normal input redirection will not work with Cloudy in MPI runs! In the example above, the code will read its commands from {{{name.in}}} and write the main output to {{{name.out}}}. If you use the -p option, the save output files will additionally go to {{{name<extension>}}} files, where the {{{<extension>}}} part is stated in the save or punch commands.
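
Schematically, the two options differ only in where the save output goes (the paths below are placeholders):

{{{
mpirun -np 8 /path/to/cloudy.exe -r name   # reads name.in, writes name.out
mpirun -np 8 /path/to/cloudy.exe -p name   # as -r, and save output goes to name<extension>
}}}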

== The optimize and grid commands ==

Two Cloudy commands can take advantage of the MPI environment.
They run Cloudy as an
"embarrassingly parallel" application, putting one model on each rank.

=== The optimize command ===

The optimizer is described in a Chapter of Hazy 1.
It makes it possible to specify an observed spectrum (and several other observables) and ask the code to reproduce it.
A number of parameters can be varied to obtain the best fit to this spectrum.

The result of this run will be a single "best" model, the one that comes
closest to reproducing the observations.

The optimizer cannot use more ranks than two times the number of free parameters '''p''', so for optimal performance you should choose the number of ranks close to '''2*p/n''' (with '''n''' an arbitrary integer >= 1). Using more than '''2*p''' ranks is pointless, unless you need the extra memory.
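
For example, with '''p''' = 6 free parameters the optimizer can keep at most '''2*p''' = 12 ranks busy, so 12, 6, 4, 3, or 2 ranks divide the work evenly; 8 ranks would also work but would leave some ranks idle part of the time, and more than 12 ranks gains nothing.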

=== The grid command ===

The grid command, described in a Chapter of Hazy 1,
makes it possible to vary input parameters to create large grids of calculations.
Several parameters can be varied and the result of the calculation
will be predictions for each of the grid points.
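
As a sketch, a parameter is varied by adding the '''vary''' keyword to its command and following it with a '''grid''' command giving the lower limit, upper limit, and increment (the values below are placeholders; see Hazy 1 for the full syntax and options):

{{{
hden 4 vary
grid 2 6 0.5
}}}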

=== Output with these commands ===

Predictions are usually saved with one of the '''save''' commands described in Hazy 1.
When run under MPI, the predictions will be brought together into large files which contain
the grid points in the same order they would have had in a serial run (unless you specify the keyword '''separate''', in which case the output from each grid point will be saved in a separate file).

There are two other useful options to consider.

The '''save grid''' command will save the parameters for each model in the grid. It will also help with identifying failed grid points and with interpreting separated save output. Make it a habit to always include it in grid runs.

The '''no hash''' option will prevent a hash string from separating different grid points.
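
As a sketch, the save portion of a grid input script might look like this (the file names are arbitrary, and the '''separate''' and '''no hash''' keywords are optional, as described above):

{{{
save grid "gridrun.grd"
save overview "gridrun.ovr" no hash
save continuum "gridrun.con" separate
}}}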

---------------