Departmental Hardware

[Diagram: CSB network resources, 2017]

Resources

The Department of Computational Biology uses a large number of computers to support its research.  These include high-end workstations in the offices and several rack-mounted Linux clusters in the server room.  The clusters run complex simulations, models, and computations that take a long time to complete or that can run in parallel across nodes.  The server room has two chilled-water InRow air conditioners and a traditional CRAC unit, a 225 kVA uninterruptible power supply (UPS), and an FM-200 fire suppression system.  The building (BST3) provides a backup power generator as well as a chilled-water plant.

The two InRow air conditioners were installed in early 2010 as the second phase of an air-conditioning upgrade project.  We now have double the previous cooling capacity plus redundancy.  The room has maintained a consistently cool temperature through a summer with several 90ºF+ weeks, and the pipes were sized and routed so that one more unit can be added in the future if necessary.

There are two clusters in total: the first is shared by the department, and the second is reserved for the exclusive use of one research group.

Cluster | Nodes * | CPU cores | Memory (GB) | Storage (TB) | Infiniband **
1       | 166     | 3396      | 11057       | 302          | 36 DDR / 24 QDR
2       | 7       | 104       | 120         | 51           | no
Totals  | 173     | 3500      | 11177       | 353          | 60 nodes

* Node counts include compute nodes only; the 3 login servers and 15 storage servers are not included.
** All clusters have Gigabit Ethernet. Infiniband speeds are 20 Gbps (DDR) and 40 Gbps (QDR).
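
The compute nodes described above are used for jobs that run in parallel across nodes. As a rough sketch of what such a job looks like, the program below reports each process's host and combines per-process partial results; it assumes an MPI implementation (for example Open MPI) is installed on the cluster, which is an assumption on our part rather than something documented here.

    /* Minimal MPI sketch (assumption: an MPI library such as Open MPI is
     * installed on the cluster -- MPI is not named in the text above).
     * Build: mpicc -O2 hello_mpi.c -o hello_mpi
     * Run:   mpirun -np 64 ./hello_mpi    (ranks are spread across nodes)
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(host, &namelen);

        /* Each process computes a trivial partial result in parallel,
         * then rank 0 collects the sum over all processes. */
        double partial = (double)rank;
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        printf("rank %d of %d running on %s\n", rank, size, host);
        if (rank == 0)
            printf("sum over %d ranks = %.0f\n", size, total);

        MPI_Finalize();
        return 0;
    }

Real workloads use the same model at much larger scale, with the Infiniband interconnect carrying the inter-node traffic.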

GPU Computing:

In addition to the CPU cluster nodes, we are adding rackmount servers that house several graphics processing units (GPUs), which can be used to speed up scientific computations.  The GPU cluster currently contains 102 GPUs: six nVidia GeForce GTX 690 cards, six GTX Titan Black cards, 28 Titan X cards, two GTX 780 cards, 32 GTX 780 Ti cards, 16 GTX 980 cards, two Tesla M2090 cards, seven Tesla K40 cards, and three Tesla K20 cards.  There are also 14 GPU workstations with cards ranging from the GTX 480 to the Titan X.  Each workstation runs Linux, and all machines use the nVidia CUDA software development kit (SDK).  Software packages such as NAMD and Amber already support running on GPU hardware.
* The cluster treats each GTX 690 card as two GTX 680s.
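
As a rough illustration of how the CUDA SDK mentioned above sees this mixed pool of cards, the sketch below queries the CUDA runtime for the GPUs installed in a single node and prints each card's name, memory, and compute capability; the file name and build command are illustrative assumptions, not documented departmental procedure.

    /* Sketch: list the GPUs visible to the CUDA runtime on one node.
     * Uses only the CUDA runtime API from the CUDA SDK; build command is
     * an assumption, e.g.: nvcc -O2 gpu_query.c -o gpu_query
     */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));
            return 1;
        }

        printf("%d CUDA device(s) found\n", count);
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* Name, total memory, and compute capability of each card;
             * e.g. a GTX 980 reports compute capability 5.2. */
            printf("  GPU %d: %s, %.1f GB, compute capability %d.%d\n",
                   i, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.major, prop.minor);
        }
        return 0;
    }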

Application Servers:

The department runs a VMware vSphere 5 cluster for its application servers: three ESXi hosts configured for N+1 high availability, managed by a vCenter server.  Each ESXi host has dual 6-core Opterons and 32 GB of RAM.  The virtual machines are stored on a fully redundant, Fibre Channel-attached Promise vTrak enterprise storage array populated with both SATA and SAS drives for reliability and, where needed, increased performance.  The virtualized environment therefore has no single point of failure.

The department also uses a number of Windows and Linux servers for file and printer sharing and other domain functions.  These servers also host research software tools such as VMD, GNM, and ANM, along with several network-accessible databases of biological research data.

Storage:

200+ TB of fully redundant network-attached storage (NAS)
11 TB redundant Fibre Channel (FC) array for the VMware servers
3 TB of FC-attached storage for the Windows servers
17.3 TB Linux NAS for backing up Linux workstations over the network
160 TB network backup server for cluster data