High Performance Computing (HPC)

The Division of Information Technology (DoIT) offers a Central High Performance Computing (HPC) environment for Faculty computing in support of teaching and learning. This service includes shared cycles on several high-performance clusters equipped with a variety of research software packages.

About the Central HPC

The Central HPC features a robust computing configuration, a Tier 3 data center environment, and professional system administration for Faculty and Principal Investigator (PI) research. Unlike the legacy practice of individual computing environments purchased by Faculty or PIs and housed locally in unauthorized spaces, the Central HPC is purchased centrally, housed in the campus data center, and managed by DoIT technical staff. While the system is available for all Faculty and PI research, Faculty or PIs who invest in the HPC receive priority access to the resources they purchased, in addition to any idle resources that may be available. This arrangement is known as an Institution/Condo Model.

How the HPC Works

The Central HPC environment works differently from a traditional computer or server. Authorized users log in to the shared environment, use a staging area to stage their data, and submit jobs to the scheduling system. Submitted jobs are queued with other Faculty jobs until resources become available and are allocated. Each job then runs on the allocated resources until it completes.

[Workflow diagram: how the HPC works]
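To make the submit-and-wait workflow concrete, below is a minimal sketch in Python. This article does not name the cluster's scheduler, so the sketch assumes a Slurm-style scheduler with an sbatch command; the staging path, job parameters, and program name are hypothetical placeholders, not documented values.

    #!/usr/bin/env python3
    # Minimal sketch of the submit-and-queue workflow described above.
    # Assumptions not confirmed by this article: a Slurm-style scheduler
    # with an "sbatch" command, and a hypothetical staging directory.
    import subprocess
    from pathlib import Path

    # Hypothetical staging area; the real location is provided with your HPC account.
    staging = Path.home() / "staging"
    staging.mkdir(exist_ok=True)

    # Write a minimal batch script: request cores and walltime, then run the program.
    job_script = staging / "example_job.sh"
    job_script.write_text(
        "#!/bin/bash\n"
        "#SBATCH --job-name=example\n"
        "#SBATCH --ntasks=4\n"        # cores requested for this job
        "#SBATCH --time=01:00:00\n"   # walltime limit
        "./my_research_program\n"     # hypothetical user program staged earlier
    )

    # Submit the job; it waits in the queue until resources are allocated,
    # then runs on those resources until it completes.
    result = subprocess.run(["sbatch", str(job_script)],
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())  # e.g. "Submitted batch job 12345"

Once queued, a job's status can typically be checked with the scheduler's query tools (squeue in Slurm), though the exact commands available on the Central HPC may differ.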

CSULB HPC Configuration

  • Cores – 2,936 distributed over 79 nodes
  • CUDA Cores – 168,960 total
  • GPU Nodes – 6 (24 GPUs total)
  • Total RAM – 31.5 TB
  • Internal Network – InfiniBand for high-speed data transfer
  • On-Board Storage – 1 TB/node, SSD-based, designed for use as scratch space (see the sketch after this list)
  • Long-Term Storage – shared use of the campus Dell EMC Isilon system
  • Operating System – Red Hat Enterprise Linux (RHEL) 7.x
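The split between node-local scratch and shared long-term storage suggests a common pattern: a job keeps its temporary files on the node's fast SSD, then copies only the results worth keeping to long-term storage before it ends. The Python sketch below illustrates that pattern; the directory paths are hypothetical placeholders, not documented mount points on the Central HPC.

    #!/usr/bin/env python3
    # Sketch of the scratch-then-archive pattern implied by the configuration above.
    # The paths below are hypothetical placeholders, not documented mount points.
    import shutil
    from pathlib import Path

    scratch = Path("/scratch/example_user/job_001")   # node-local SSD: fast, temporary
    long_term = Path("/home/example_user/results")    # shared Isilon storage: persistent

    scratch.mkdir(parents=True, exist_ok=True)
    long_term.mkdir(parents=True, exist_ok=True)

    # 1. While the job runs, keep intermediate files on the node's local SSD.
    intermediate = scratch / "simulation_output.dat"
    intermediate.write_text("intermediate results produced during the job\n")

    # 2. Before the job ends, copy only the results worth keeping to long-term storage.
    shutil.copy2(intermediate, long_term / intermediate.name)

    # 3. Clean up scratch so the node's 1 TB SSD stays available for other jobs.
    shutil.rmtree(scratch)

Keeping intermediate files on the node-local SSD keeps heavy I/O off the shared long-term storage while the job runs.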

Available Software

The Central HPC has a variety of software applications to support Faculty/PI research. For more information about HPC software, please visit the HPC Software article.

Target Users

Research Faculty and Principal Investigators (PIs)

Getting Started

To request service, please review the article on the Institution/Condo Model to decide how you would like to participate.

Support

Support is available Monday through Friday, 8:00 a.m. to 5:00 p.m. To submit an issue, please email ITS-HPCSupport@csulb.edu; we will respond within 2 business hours. DoIT does not provide assistance with formulating computations, code support, or using applications/software; we suggest seeking peer support from other Condo Owners.

HPC Steering Committee

  • Enrico Tapavicza (CNSM, Chemistry/Biochem)
  • Eric Sorin (CNSM, Chemistry/Biochem)
  • Michael Peterson (CNSM, Physics & Astronomy)
  • Thomas Klähn (CNSM, Physics & Astronomy)
  • Renaud Berlemont (CNSM, Biological Sciences)
  • Hamid Rahai (COE, Assoc Dean for Research and Graduate Programs)
  • Barbara Taylor (CNSM, Assoc Dean for Research and Development)

The committee meets each semester, with the option to meet more often if needed.

Participating Condo Owners

  • Enrico Tapavicza (CNSM, Chemistry/Biochem)
  • Eric Sorin (CNSM, Chemistry/Biochem)
  • Michael Peterson (CNSM, Physics & Astronomy)
  • Thomas Klähn (CNSM, Physics & Astronomy)
  • Paul Sun (CNSM, Mathematics and Statistics)
  • Hamid Rahai (COE, Engineering)
