The General-Purpose Computing Cluster
The College’s general-purpose computing cluster has 120 CPU cores in 6 computers. A seventh computer, known as cluster, is the head node; it is reserved for job management and software development, and is the only one you will usually log in to.
Nodes 1-3 each have:
- 32 cores (4 Intel® Xeon® E5-2665 64-bit CPUs with 8 cores each) running at 2.4 GHz
- 256 GB of RAM (8 GB / core)
Nodes 4-6 are virtual machines, providing:
- 8 cores (2 Intel® Xeon® E5-2665 64-bit CPUs with 4 cores each) running at 2.4 GHz
- 32 GB of RAM (4 GB / core)
In addition, each node has access to shared network disk space:
- 700 GB for general use, accessed through each user's home directory
- 1.5 TB for current projects, available through the cluster-scratch directory
- 2 TB for old projects, accessed through the cluster-archive directory
Each node also has 100 GB of local disk space available through the /tmp directory.
If you don't need to access files written by your software after the job ends (for example, if you are using Condor’s parallel universe), it is fastest to write any intermediate files in /tmp. If you want to access the files after job completion, write them in cluster-scratch, which is fast but also network-accessible. Any files that you won't be using for a while can be moved to cluster-archive for storage.
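The guidance above can be sketched as a small job script: work in fast node-local /tmp, then copy only the final result to network-accessible scratch space. This is a sketch under assumptions, not an official recipe — in particular, the location of cluster-scratch under your home directory is an assumption; adjust SCRATCH to wherever that directory is mounted for your account.

```shell
#!/bin/sh
# Sketch: write intermediates to node-local /tmp, keep only the result
# on network storage. SCRATCH location is an assumption; override it
# in your environment if cluster-scratch is mounted elsewhere.
SCRATCH="${SCRATCH:-$HOME/cluster-scratch}"

# Create a private working directory on the fast local disk.
WORKDIR=$(mktemp -d /tmp/myjob.XXXXXX)

# ... run the job here, writing intermediate files under "$WORKDIR" ...
echo "final result" > "$WORKDIR/result.dat"

# Copy the final output to network-accessible scratch space,
# then clean up the node-local intermediates.
mkdir -p "$SCRATCH"
cp "$WORKDIR/result.dat" "$SCRATCH/"
rm -rf "$WORKDIR"
```

Because /tmp is local to each node, any file you leave there is invisible from the head node and may be cleaned up at any time, which is why the copy to scratch happens before the job script exits.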
The Computing Cluster was originally set up in late 2005 with 52 CPUs in 26 computers, plus a single-CPU head node. In the Fall of 2008, an additional 408 CPU cores in 51 computers were added. In spring 2018 the Cluster was replaced with the new machines described above.
The Hadoop Cluster
The College’s Hadoop computing cluster has 32 CPU cores in 8 computers. A ninth computer, known as hadoop2, is the head node; it is reserved for job management and is the only one you will usually log in to.
Each node has:
- 4 cores (2 Intel® Xeon® E5 64-bit CPUs with 2 cores each) running at 2.4 GHz
- 8 GB of RAM (2 GB / core)
- 178 GB of disk space, shared across the cluster through the Hadoop Distributed File System (HDFS)
The Hadoop Cluster was originally set up in late 2016.