Last updated: July 11, 2017
The Department of Computer Science at Princeton University maintains dedicated computing and network infrastructure to support its academic and research mission. In addition to these resources, members of the CS department have access to shared campus computing resources.
The CS Network has a backbone based on 10 Gb/sec Ethernet and spans several buildings, including the Computer Science Building, portions of Sherrerd Hall (the Center for Information Technology Policy), 221 Nassau Street, and the Friend Center (two research labs and a workroom containing 8 Linux computers for academic use), as well as a dedicated "island" of 25 racks within the main campus data center. The CS Network has a 40 Gb/sec uplink to the main campus network, and the main campus has two uplinks to the commodity Internet (10 Gb/sec each, to different providers) plus a single 10 Gb/sec uplink to Internet2. Within the Computer Science Building, each room is wired with multiple Category 5/5e UTP ports, single-mode fiber, multimode fiber, and RG6 coaxial cable. The entire building is covered by an 802.11b/g/n/ac wireless network.
The department maintains a central, high-performance modular filesystem with an InfiniBand interconnect. In its current configuration, there are approximately 500 TB of usable storage. The filesystem connects to the CS Network with a 20 Gb/sec uplink. All research data are protected by the equivalent of RAID-6 or better.
At any given time, there are hundreds of personal computers connected to the network running Windows, macOS, or Linux. Personal computers can connect to the central filesystem using CIFS. For centralized, general-purpose computing, the department has two clusters of machines running the Springdale Linux distribution (a custom Red Hat-derived distribution maintained by members of the computing staff of Princeton University and the Institute for Advanced Study). The first cluster, cycles, consists of four servers; each server has dual fourteen-core processors, 384 GB RAM, and 1 Gb/sec Ethernet connections to the network and filesystem. The second cluster, ionic, is a traditional Beowulf cluster with an aggregate of 956 cores and 10 TB RAM. Each node in the ionic cluster has a 1 Gb/sec Ethernet connection to a dedicated switch which, in turn, has a 10 Gb/sec uplink to the department backbone and filesystem. Both clusters are remotely accessible via SSH.
The Computer Science Building and the attached Friend Center building house several laboratory spaces (Graphics Lab, Systems Lab, Theory Lab, etc.) as well as our local co-location facility. The co-location facility contains 10 racks that are available for department infrastructure and specialized research equipment. In addition, the CS department is currently assigned 25 racks at the University's main data center located approximately 2 miles from the Computer Science Building. These racks are also available for both department infrastructure and specialized research equipment.
In addition to the resources dedicated to the CS department, researchers have access to campus resources such as the library system (over 7 million volumes), the DataSpace repository (for long-term archiving and dissemination of research data), and shared computational clusters. The largest cluster, tiger2, consists of 80 nodes with an aggregate of 2240 cores, 20 TB RAM, and 320 GPUs, all interconnected with Intel Omni-Path.