To facilitate faculty and student research, the College operates a 460-core computing cluster consisting of 77 computers, or nodes. Such distributed computing is related to parallel computing (which can occur within a single computer with multiple CPUs) and to grid computing (a large number of mostly independent computers cooperating on a project).
To request an account on the Computing Cluster, please contact Andy Anderson. He can also assist with the implementation of your project.
The Cluster is accessible over the Internet through a head node, using Remote Desktop Connection, an X11 connection, or Secure Shell (ssh, standard in the Mac Terminal; PuTTY on Windows). The head node is the only machine on which you should develop software and from which you should submit and control jobs.
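For example, a Secure Shell connection from the Mac Terminal looks like the following (the hostname below is a hypothetical placeholder; substitute the actual head-node address and your College account name):

```shell
# Connect to the head node over ssh.
# 'cluster.example.edu' is a placeholder hostname, not the real address.
ssh username@cluster.example.edu
```

Once connected, you can edit code, compile, and submit jobs from the head node's command line.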
Most Cluster users rely on the Condor system to define how each instance of the software should be run (called a job), to distribute the jobs automatically to the available nodes, and to share the computing resources fairly among all of the users.
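A Condor job is described in a submit description file, which names the program to run and how many copies to queue. The sketch below uses hypothetical file names; adjust them for your own program:

```
# Minimal Condor submit description file (hypothetical names).
universe    = vanilla
executable  = my_program.py
arguments   = $(Process)
output      = job_$(Process).out
error       = job_$(Process).err
log         = job.log
queue 10
```

Saved as, say, `my_job.sub`, this would be submitted from the head node with `condor_submit my_job.sub`, which queues ten jobs, each receiving its index (0-9) as a command-line argument.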
There may be some situations where you may not want to use Condor, e.g., when using Mathematica's built-in parallel computing features.
More information and some examples can be found in the Knowledge Base.
A complete example using Condor and a problem solved in the Python programming language can be found here.
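As a minimal sketch of what such a Python job might look like (the function and workload here are hypothetical illustrations, not the linked example): each job reads its Condor-assigned process index from the command line, does its share of the work, and writes the result to standard output, which Condor captures in the job's output file.

```python
# Minimal sketch of a Python program suitable for a Condor job.
# Hypothetical workload: each job computes a sum of squares sized
# by its task index, passed as the first command-line argument.
import sys


def compute(task_id: int) -> int:
    # Placeholder computation standing in for real per-job work.
    return sum(i * i for i in range(task_id + 1))


if __name__ == "__main__":
    # Condor can pass $(Process) as the argument; default to 0
    # when run by hand for testing.
    task_id = int(sys.argv[1]) if len(sys.argv) > 1 else 0
    print(compute(task_id))
```

Printing to standard output is convenient because Condor redirects each job's stdout to the file named in the submit description, so results can be collected afterward on the head node.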