The Social Sciences High Performance Computing Cluster (SSHPCC) supports research across the Social Sciences that relies on major software packages with substantial compute, memory, and storage requirements.
In this pilot phase, we are looking for projects that will help shape how SSHPCC can best support our users’ unique needs.
Interested UCLA Social Sciences faculty, graduate students, and researchers within the division are welcome to apply for time on SSHPCC via our intake form. Well-suited workloads include:
- Analyzing large datasets requiring high performance and large volume storage.
- Parallel processing of complex data or simulations (see the example after this list).
- Sharing large amounts of data among multiple researchers within the same team.
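As an illustration of the parallel-processing use case, the sketch below sums a large in-memory array with OpenMP threads, the kind of shared-memory parallelism a single compute node supports. The array size and thread behavior are illustrative assumptions, not a prescribed workflow.

```c
/* Minimal OpenMP sketch: parallel reduction over a large array.
 * Illustrative only; the array size (~800 MB) is an arbitrary assumption. */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t n = 100000000;                /* 100 million doubles */
    double *data = malloc(n * sizeof *data);
    if (!data) { perror("malloc"); return 1; }

    /* Fill the array in parallel. */
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++)
        data[i] = (double)i;

    /* Parallel reduction across all available threads. */
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += data[i];

    printf("max OpenMP threads: %d, sum = %g\n", omp_get_max_threads(), sum);
    free(data);
    return 0;
}
```

Compiled with one of the GCC versions listed under Cluster Software below (with -fopenmp), a program like this could scale up to the full core count of a single compute node.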
Compute Node Specifications:
- Dell R7525
- 2 x AMD EPYC 7662 per node
- 128 Cores, 256 threads per node
- Clock speed: 2.0 GHz base, up to 3.3 GHz boost
- Memory capacity: 1TB DDR4-3200 DRAM
- Local storage: 960GB SATA SSD
- GPU: 1 x NVIDIA GeForce RTX 2080
GPU Node Specifications:
- AMD EPYC 7443P
- 48 cores, 96 threads per node
- Clock speed: 2.85 GHz base, up to 4.0 GHz boost
- Memory capacity: 256GB DDR4-3200 DRAM
- Local storage: 1TB NVMe SSD
- GPU: 2 x NVIDIA A40
Storage Node Specifications:
- Dell R740xd
- NFS Protocols
- 250TB ZFS
Networking:
- Dell S5248F-ON
- 25 Gigabit SFP28 Interconnect
- RDMA over Converged Ethernet (RoCE)
Cluster Specifications:
- Total compute nodes: 5
- Total GPU nodes: 1
- Total GPUs: 6
- Total compute cores: 688
- Total memory: 5.25TB
- Total flash memory: 5.8TB
Cluster Software:
- Cluster management: OpenHPC 2.0
- Operating system: Rocky Linux 8
- Scheduler and resource manager: SLURM
- Module manager: Lmod, EasyBuild
- Compilers: GCC 8.3, 9.3, 10.2, 10.3, 11.2, Intel 2019.4.281, 2020.1.217
- Message passing: OpenMPI 3.1.4, 4.0.3, Intel MPI 2018.5.288, 2019.7.217
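As a sketch of how the listed compilers and MPI libraries might be used together, the example below is a minimal MPI program in C in which each rank reports where it is running. It is a generic illustration, not a site-specific template.

```c
/* Minimal MPI sketch: each rank reports its place in the job.
 * Illustrative only; not a site-specific template. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size, name_len;
    char node_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total ranks in the job */
    MPI_Get_processor_name(node_name, &name_len);

    printf("rank %d of %d running on %s\n", rank, size, node_name);

    MPI_Finalize();
    return 0;
}
```

A program like this would typically be built against one of the listed OpenMPI or Intel MPI installations (made available through Lmod modules) and submitted through SLURM, which places the ranks across compute nodes; the exact module names and submission options are covered in the cluster documentation.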
Documentation:
How to use SSHPCC (SSCERT HPC), including available software, file transfer, job scheduling, and acknowledging the cluster.