The College Computing Cluster has been replaced by the new HPC System. Please visit that page.

To take advantage of the Cluster, you should be able to break apart your research problem into many small pieces that are independent of each other. Ideally, when every piece has completed its task, your project will be finished. Such a problem is sometimes referred to as being embarrassingly parallel. If, however, the pieces must repeatedly combine their results with the other pieces' output and perform additional calculations, they need to be able to exchange information with each other. Such a problem is called synchronously parallel. Both types of problems can be handled by the Cluster, but the latter type is harder to program and more resource-intensive due to the necessity of interprocess communication.

All programs must be able to run on the Cluster's hardware and operating system.

To request an account on the Computing Cluster, please contact Andy Anderson. He can also assist with the implementation of your project.

The Cluster is accessible over the Internet through the head node cluster.amherst.edu, using one of the methods described on the Unix Space page. The head node is the only machine on which you should develop software, and the only one from which you should submit and control jobs.

In most cases you will use the Cluster's job-control system, Condor. However, there may be some situations where you won't want to do that, e.g. when using Mathematica's built-in parallel computing features.

Complete documentation of Condor can be found here. Some basic instructions follow.

You should set up your cluster jobs in your home directory's subfolder cluster-scratch, whose contents are written locally by each computer in the cluster but are readable by all of them. Writing your output here is therefore very fast but must take place in uniquely named folders or files. Temporary files can be written in the local directory /tmp, which is periodically cleared automatically. Older projects should be moved to your home directory's subfolder cluster-archive for longer-term storage. 
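For example, a minimal sketch of setting up a project area (the project name myproject is only a placeholder):

# Create a project folder in cluster-scratch for this set of runs
# ("myproject" is just an example name).
mkdir -p ~/cluster-scratch/myproject
cd ~/cluster-scratch/myproject

# When the project is complete, move it into longer-term storage:
cd ~
mv cluster-scratch/myproject cluster-archive/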

All executing programs should be able to read parameters from the command line, from environment variables, or from files (i.e. if a program only prompts for data from the standard input, it may not work). Output can be written to the standard output or to files. If your jobs run for an extended period of time, you may wish to periodically save intermediate results in case of premature job termination, a process known as checkpointing. In many cases, Condor can provide low-level checkpointing and automatic job continuation.
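Where Condor's low-level checkpointing isn't available (e.g. in the Vanilla Universe, described below), application-level checkpointing can be as simple as recording your progress in a file and resuming from it on restart. Here is a rough sketch in a shell wrapper (the program name do-one-step and the file names are hypothetical):

#!/bin/bash
# Hypothetical application-level checkpointing: resume from the last
# completed step recorded in checkpoint.txt, if that file exists.
STEPS=$1                                # parameter read from the command line
START=1
if [ -f checkpoint.txt ]; then
    START=$(cat checkpoint.txt)
fi
for (( i=START; i<=STEPS; i++ )); do
    ./do-one-step $i >> results.out     # placeholder for the real computation
    echo $((i+1)) > checkpoint.txt      # record progress after each step
done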

There are three basic Condor environments in which to run jobs:

  1. Vanilla Universe: if you are using a program that is pre-compiled or if you want to manage your own job checkpointing and restarts.
  2. Standard Universe: if you can compile your own program, you can use Condor's libraries to implement checkpointing and automatic job continuation.
  3. Parallel Universe: if you can compile your own program and it is synchronously parallel, you can implement message passing and co-schedule your jobs.

Elaboration on these and other environments can be found in the documentation.
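In a Condor command file (described in the example below), the environment is chosen with the universe directive; on its own, that line would read one of:

universe = vanilla
universe = standard
universe = parallel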

The following is an example of how to use Condor. It is hardly a comprehensive set of instructions, as it describes only the Vanilla Universe, but it includes a few notes to go along with a simple executable and set of scripts from which you can build something more useful.


Basic Condor Commands

There are four basic commands that you need to know about:

  1. cluster_nodes: Gives an overview of the cluster computers, how busy they are, and whether a particular machine in the cluster is down. (Note: this is a non-standard application.)
  2. condor_status_pretty: Provides a more detailed view of the state of each computer in the cluster. There are four processors per machine, and you will see the load on each processor and how long the jobs have been running. (Note: this is a local variation of the standard condor_status that works with long node names.)
  3. condor_q: Shows the queue of Condor jobs, listing all of the jobs submitted in the order of their submission, which ones are running or idle, and (at the bottom) the total number in the queue.
  4. condor_rm: Removes a job or job cluster from the Condor queue.

Because of the large number of computers in the cluster, it is often useful to "pipe" the output of these commands into the command more, which will let you page through them slowly (by hitting the space bar). For example:

cluster_nodes | more

It is also sometimes useful to pick out parts of these very long listings with the command grep, which prints only the lines that match a pattern. For example, to see just the running jobs, enter:

condor_q | grep ' R ' | more

which matches lines containing the letter R with a space before and after it.


Example Files

Running programs under Condor requires a few support files. In addition to your executable program you will need a command file to control the submission of jobs to the cluster. The command file must contain all of the command-line arguments for a particular job, along with a way to uniquely identify its output. Since those command-line arguments are likely to change from one experimental run to the next, it's useful to create some supporting command scripts that will automate the construction of the command file.

We list here the files that are involved, and then go into a more detailed description of their roles:

  • test.cc: Source code for the test program. Look into it, and you'll find it's a very simple C++ program. It merely takes an arbitrary positive integer as a command-line argument and prints some lines of dots to stdout.
  • Makefile: Instructions to the command make to compile test.cc into the executable program test-${HOSTTYPE}, which is ready to run on the CPU architecture named by the variable ${HOSTTYPE}. Note that currently we have only 64-bit Intel-compatible architectures in the cluster, identified by x86_64.
  • test-one.sh: A shell script that runs a single instance of test-${HOSTTYPE} on any architecture (a rough sketch of such a script appears just after this list).
  • test-all.sh: A shell script that submits a group of runs to Condor, which:
    • calculates the parameters that will be passed to test-one.sh for each run;
    • creates a set of directories to hold the output of each run;
    • constructs a Condor command file test.cmd;
    • and finally calls condor_submit with that command file, thus submitting the group of runs.
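The actual scripts are included in the archive below; purely as an illustration, a script along the lines of test-one.sh might look roughly like this (a sketch, not the distributed file):

#!/bin/bash
# Run the executable compiled for this machine's architecture,
# passing along the line length given as the first argument.
# (bash sets ${HOSTTYPE} to the machine's architecture, e.g. x86_64)
exec ./test-${HOSTTYPE} "$1"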

These files are available in this compressed tar archive and can be transferred to the cluster via SMB file sharing/mapping. Or, you can type or paste in this one-line command directly on the cluster:

curl https://www.amherst.edu/media/view/101119/original/test.tar.gz -o test.tar.gz


Using the Example

First unpack the example by entering the command:

tar xzvf test.tar.gz

This creates a subdirectory holding the example files, so change into that directory:

cd test

Then, compile the source code into the executable test-${HOSTTYPE} by entering:

make

If there were more types of CPU architectures in the cluster than just x86_64, you would need to log in to one of the nodes with the other architecture(s) and run make again to generate the other versions of test-${HOSTTYPE}.

Now try running test-one.sh by itself:

./test-one.sh 60

It will invoke the version of test-${HOSTTYPE} that matches the machine's architecture, and print something like:

Line length: 60
............................................................
............................................................
............................................................
............................................................
..........

Assuming that works, you can now try the other script to submit a group of jobs to Condor:

./test-all.sh

The output should look something like this:

Submitting job(s)........................................... 
Logging submit event(s)..................................... 
100 job(s) submitted to cluster 12476.

While your jobs are executing (or waiting to execute), let's examine the command file test.cmd that's generated by this script:

## Global job properties
universe =     vanilla
notification = never
getenv =       true
initialdir =   /home/<username>/cluster-scratch/test
executable =   test-one.sh
requirements = (((Arch=="INTEL") || (Arch=="x86_64")) && (OpSys=="LINUX"))
priority =     5
....

You'll note that the command file begins with global properties that apply to every job in the job cluster, telling Condor:

  • use the vanilla universe;
  • don't generate notifications when the jobs finish;
  • pass environment variables on to the executing jobs;
  • where the executable file is located;
  • the name of the executable file;
  • what the requirements are for resources such as CPU architecture, operating system, available RAM, etc. (important in a heterogeneous environment; see the example just after this list);
  • what priority to use when scheduling this job (this is only relative to your own jobs — your overall job priority relative to others is controlled by Condor based on order of submission and time of execution).
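As an example of tightening the requirements expression above, a minimum-memory condition could be appended (a hypothetical modification; the Memory attribute is expressed in megabytes):

requirements = (((Arch=="INTEL") || (Arch=="x86_64")) && (OpSys=="LINUX") && (Memory >= 1024))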

In addition, the command file provides instructions for the individual jobs:

## Task properties
arguments = 0
output = /home/<username>/test/results/0/out
error = /home/<username>/test/results/0/err
log = /home/<username>/test/results/0/log
queue
## Task properties
arguments = 1
output = /home/<username>/test/results/1/out
error = /home/<username>/test/results/1/err
log = /home/<username>/test/results/1/log
queue
....

After each set of instructions, the job is queued for execution. The first instruction is the list of arguments that will be passed to the executable when it starts, just as when you ran test-one.sh from the command line above. The next three instructions name files that store data related to the job.

If you look at your current directory, you should see that the script creates a new subdirectory named results, and within it a set of subdirectories named 0 through 99, one for each job submitted. In each of those directories you will find the three files out, err, and log. The first holds the output from your program (stdout), the second holds any errors generated by your program (stderr), and the third holds messages from Condor about that job's status, e.g. the following for job 1 in job cluster 12476:

000 (12476.001.000) 03/23 16:44:59 Job submitted from host: <148.85.78.21:49944>
...
001 (12476.001.000) 03/23 18:51:17 Job executing on host: <148.85.78.25:52070>
...
006 (12476.001.000) 03/23 18:51:25 Image size of job updated: 75428
...
005 (12476.001.000) 03/23 18:51:44 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:16, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:16, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
...
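
Once your jobs have finished, you can examine their results directly from the head node; for example (a small sketch):

# Look at the output of one job:
more results/0/out

# See whether any job wrote errors to stderr (no output means none):
cat results/*/err

# Count the jobs whose Condor logs report normal termination:
grep -l "Normal termination" results/*/log | wc -l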

If you use the condor_q command (and you may want to pipe its output through more or grep), you should see your jobs at the end of the queue:

12476.0   <username>       3/23 16:44   0+00:00:00 I  5   9.8  test-one.sh 0
12476.1   <username>       3/23 16:44   0+00:00:00 I  5   9.8  test-one.sh 1
....

In this listing, note:

  • The full execution command with its arguments appears at the end of each line.
  • The "I" means a job is idle, waiting to run; once it starts, that flag will change to "R".
  • The "5" is the priority you set in the command file.

You may also want to check the running times of all currently active jobs: they can sometimes run for many hours each, which may keep your test jobs from getting through for a while. If that happens, you can let your jobs sit in the queue if you want to see the results, or you can remove your job cluster with the condor_rm command, like so:

condor_rm 12476

or, for a single job such as job 20:

condor_rm 12476.20


A Summary

This simple example gives you a framework that you can extend to submit jobs over ranges of many parameter values; how far you take it depends on how much you want to change the submission script. You can even rewrite the submission script entirely in another language, such as Perl, if you prefer.
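
For instance, a modified submission script could loop over a range of parameter values when it writes test.cmd. A rough sketch of the task-specific portion, assuming the global properties (including initialdir) have already been written to test.cmd and that line lengths 10 through 100 in steps of 10 are the values of interest:

#!/bin/bash
# Hypothetical fragment of a modified test-all.sh: append one task per
# parameter value to test.cmd, then submit the whole group at once.
for len in $(seq 10 10 100); do
    mkdir -p results/$len
    cat >> test.cmd <<EOF
## Task properties
arguments = $len
output = results/$len/out
error = results/$len/err
log = results/$len/log
queue
EOF
done
condor_submit test.cmd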

Good luck! Send questions to Andy Anderson if you have trouble.
