Computing Facilities

Last updated: November 2022

The Department of Computer Science at Princeton University maintains a dedicated computing and network infrastructure to support its academic and research mission. Members of the CS department also have access to shared campus computing resources.

The CS Network has a backbone based on 10 Gb/sec Ethernet and spans several buildings including the Computer Science Building, portions of Sherrerd Hall (the Center for Information Technology Policy), 221 Nassau Street, the Friend Center, as well as a dedicated "island" of 25 racks within the main campus data center. The CS Network has a 40 Gb/sec uplink to the main campus network. The main campus has two 10 Gb/sec uplinks to the commodity Internet and two 100 Gb/sec uplinks to Internet2. Within the Computer Science Building, each room is wired with multiple Category 5/5e UTP ports, single-mode fiber, multimode fiber, and RG6 coaxial cable. The entire building is covered by an 802.11b/g/n/ac wireless network.

The department maintains a central, high-performance modular filesystem based on Isilon H7000 storage nodes with an InfiniBand interconnect. In its current configuration, there are approximately 1.1 PB of usable storage. The filesystem connects to the CS Network with a 20 Gb/sec uplink. All research data are protected by the equivalent of RAID-6 or better.

At any given time, there are hundreds of personal computers connected to the network running Windows, macOS, or Linux. Personal computers can connect to the central file system using CIFS. For centralized, general-purpose computing, the department has two clusters of machines running the Springdale Linux distribution (a custom Red Hat distribution maintained by members of the computing staff of Princeton University and the Institute for Advanced Study). The first cluster, cycles, consists of four servers; each server has dual sixteen-core processors, 512 GB of RAM, and 1 Gb/sec Ethernet connections to the network and file system. The second cluster, ionic, is a traditional Beowulf cluster with an aggregate of 5,928 cores, 34.3 TB of RAM, and 577 GPUs. Each node in the ionic cluster has a 1 Gb/sec Ethernet connection to a dedicated switch which, in turn, has a 10 Gb/sec uplink to the department backbone and file system. Both clusters are remotely accessible via SSH.
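
For illustration, the following is a minimal sketch of scripted remote access to one of these clusters over SSH using the Python paramiko library; the hostname and username shown are placeholders rather than actual department login names:

    import paramiko

    # Open an SSH session to a cluster login node (placeholder hostname and username).
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # convenience only; verify host keys in practice
    client.connect("cycles.cs.princeton.edu", username="your_netid")

    # Run a quick sanity check on the remote node: core count and memory.
    stdin, stdout, stderr = client.exec_command("nproc && free -h")
    print(stdout.read().decode())

    client.close()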

The Computer Science Building and the attached Friend Center building house several laboratory spaces (Graphics Lab, Systems Lab, Theory Lab, etc.) as well as our local co-location facility. The local co-location facility contains 10 racks that are available for department infrastructure and specialized research equipment. In addition, the CS department is currently assigned 25 racks at the University's main data center located approximately 2 miles from the Computer Science Building. These racks are also available for both department infrastructure and specialized research equipment.

In addition to the resources dedicated to the CS department, researchers also have access to campus resources such as the library system (over 11 million holdings), the DataSpace Repository (for long-term archiving and dissemination of research data), and shared computational clusters. The largest cluster, tiger, consists of 80 CPU+GPU nodes and 488 CPU nodes. The CPU+GPU nodes have an aggregate of 2,240 CPU cores (20 TB RAM) and 320 NVIDIA P100 GPUs (5 TB RAM); the CPU nodes have an aggregate of 16,320 Skylake cores (99 TB RAM). The nodes are interconnected using Omni-Path.
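
For a rough sense of scale, the aggregate figures above can be reduced to approximate per-node numbers for the CPU+GPU partition; the arithmetic below is a back-of-the-envelope sketch based only on the totals quoted here, not on published node specifications:

    # Approximate per-node resources for tiger's 80 CPU+GPU nodes,
    # derived from the aggregate figures quoted above.
    gpu_nodes = 80
    cpu_cores, gpus, ram_tb = 2240, 320, 20

    print(cpu_cores / gpu_nodes)      # 28.0 CPU cores per node
    print(gpus / gpu_nodes)           # 4.0 P100 GPUs per node
    print(ram_tb * 1000 / gpu_nodes)  # 250.0 GB of CPU RAM per node (approx.)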
