
Developing with Cloud9 (and others)

name      cores  memory  local disk  pn_paris  purchased  notes
cloud9    28     256 GB  /home66     39.4 s    2016/9
nephos    20     128 GB  /home6      45.4 s    2014/4
wolkje    16     64 GB   /home       50 s                 cloudtests only
radegund  16     64 GB   /home       55.3 s    2012/8     ubuntu
fog       4      16 GB   /home       39 s      2014       not in cluster

Notes

Timings in the pn_paris column are for the pn_paris test case run locally on each machine, compiled with gcc at revision r8943.

df -h | grep home

will show which volumes are local and which are currently NFS mounted.

echo $HOME

will show your home directory.
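
Combining the two, here is a minimal sketch of a check for whether your home directory is local or NFS-mounted; it assumes GNU df, whose -T flag prints the filesystem type:

df -PT "$HOME" | awk 'NR==2 { print "device:", $1, " type:", $2 }'

A type of nfs (or nfs4) means the directory is served over the network; ext4 or similar means it is local to the machine.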

To find information about the CPU and memory, do

less /proc/cpuinfo
less /proc/meminfo
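
For a one-line summary of each, the following should work on any of the Linux machines above:

grep -c '^processor' /proc/cpuinfo   # number of logical cores
grep MemTotal /proc/meminfo          # total RAM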

cloud9

The original machine name ...

wolkje

Dutch for little cloud. Logins are disabled since this machine is dedicated to cloudtest runs.

nephos

Greek for cloud.

radegund

Named after the smallest pub in Cambridge.

Intel C++ compiler

We have licensed the Intel C++ compiler, icc. My .profile includes the following:

source /opt/intel/composer_xe_2013/bin/compilervars.sh intel64

At the time of this writing this selects version 13.0.1. Version 11.1 is also available.
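
To confirm which version the script actually selected, icc accepts the GCC-style --version flag:

icc --version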

To find out which versions are available, do

which icc

to discover the location of the compilers, then do

ls /home66/home/opt/intel/

The directory /home66 will be different on different machines.

Portland Group compiler

We have licensed this compiler but are in the process of removing it.

The user guide is at https://www.pgroup.com/doc/pgiug.pdf

pgCC is one of the few C++ compilers that offer array bounds checking as an option. The build directory sys_pgccBounds uses both the compiler's bounds checking and the multi_arr class bounds checking.
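
As a hedged example of invoking the compiler's checks by hand, -Mbounds is the PGI option that enables run-time array bounds checking (consult the user guide above to confirm the details); mytest.cpp is a placeholder file name:

pgCC -Mbounds -o mytest mytest.cpp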

This compiler includes an SDK for the NVIDIA CUDA hardware we have on cloud9. It is located at /usr/local/cuda/NVIDIA_GPU_Computing_SDK

To use this compiler, include this in your login configuration file (.profile on my account):

PGI=/opt/pgi
export PGI
PATH=$PATH:/opt/pgi/linux86-64/10.3/bin
export PATH
export LM_LICENSE_FILE=$PGI/license.dat
export MANPATH=$MANPATH:/opt/pgi/linux86-64/10.3/man

We use version 12.8, which is selected with

export PATH=$PATH:/opt/pgi/linux86-64/12.8/bin
export MANPATH=$MANPATH:/opt/pgi/linux86-64/12.8/man
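
After setting the paths, a quick sanity check that the 12.8 compiler is the one found first; PGI compilers report their version with the -V flag:

which pgCC
pgCC -V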

To build the code, use the source/sys_pgcc or source/sys_pgccBounds directory.
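
The build itself is then a sketch along these lines, assuming each sys_* build directory carries its own Makefile:

cd source/sys_pgccBounds
make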


Setting up MPI

We build in sys_mpi_gcc - NOT with icc as on dlx.

There are two versions of MPI available on the 64-bit machines, openmpi-x86_64 and mpich2-x86_64. OpenMPI is loaded by default and is the one we use.
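
A quick way to confirm that OpenMPI is the active implementation is to ask the launcher for its version; OpenMPI's mpirun identifies itself by name in the output:

which mpirun
mpirun --version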

It is possible to load other versions, but these may interfere with OpenMPI. To see what is available do

module avail

Do

module list

to see what modules were loaded when you logged into the machine.

Running with MPI

You must compile the code with the same version of MPI that you use to run it.
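
One simple guard against a mismatch is to check that the compiler wrapper and the launcher come from the same installation before starting a long run:

which mpicc mpirun   # both paths should point into the same MPI tree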

To run do

mpirun -np NP /home66/gary/cloudy/trunk/source/sys_mpi_gcc/cloudy.exe -r $1

where NP is the number of MPI ranks (processes) you wish to assign to the job. All grid runs need the -r option and the name of the input file.
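
A minimal wrapper script along these lines can save retyping; the executable path is the one quoted above and will differ on your checkout, run_mpi.sh is a hypothetical name, and nproc (GNU coreutils) reports the number of available cores:

#!/bin/sh
# run_mpi.sh <input> - hypothetical helper: launch a grid on all local cores
NP=$(nproc)
mpirun -np "$NP" /home66/gary/cloudy/trunk/source/sys_mpi_gcc/cloudy.exe -r "$1"

Invoke it as ./run_mpi.sh name_of_input_file.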


Notes on MPI

openmpi-x86_64

This is started by default and is the one we use.

mpich2-x86_64

This can be loaded with module, but interference with OpenMPI will cause a number of warnings. You must start the mpd daemon before running mpich2 jobs. Do this by issuing

mpd &

at a command prompt.
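
To verify that the daemon is actually up before launching a job, mpdtrace (part of the same mpich2 process-manager suite) lists the machines on which mpd is running:

mpdtrace   # should print this machine's hostname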



Return to DeveloperPages

Return to main wiki page

