Facilities & Supercomputing Capabilities
Facilities and Equipment
The SIMCenter lab is built on an industry-standard architecture that provides a flexible yet powerful computing environment for researchers. This environment allows engineers to connect to high-performance computing resources remotely, offloading model manipulation, pre-processing, and post-processing from their local machines. We also offer a wide and continually updated range of software.
High Performance Capabilities
Modern and future CAE tools often require significant computing power. To meet this need, SIMCenter has purchased its own Virtual Desktop Infrastructure (VDI) and partnered with the Ohio Supercomputer Center (OSC) to leverage its extensive supercomputing environment.
All of this is enabled by a high-speed fiber optic connection from SIMCenter facilities in Smith Lab to OSC.
As a leader in high-performance computing and networking, OSC is a vital resource for Ohio's scientists and engineers. OSC's cluster computing capabilities make it a fully scalable center, with mid-range machines that match those found at National Science Foundation centers and other national laboratories.
OSC provides statewide resources to help researchers make discoveries in a vast array of scientific disciplines. Beyond providing shared statewide resources, OSC works to create a user-focused, user-friendly environment for its clients.
Collectively, OSC supercomputers provide a peak computing performance of 1,768 teraflops. The center also offers more than 14.0 petabytes of disk storage capacity distributed over several file systems, plus 5.5 petabytes of backup tape storage.
• Pitzer Cluster: A 10,240-core Dell Intel Xeon Gold 6148 machine
  • 216 nodes have 40 cores per node and 192 GB of memory per node
  • 4 nodes have 80 cores and 3.0 TB of memory, for large Symmetric Multiprocessing (SMP) style jobs
  • 32 nodes have 40 cores, 384 GB of memory, and 2 NVIDIA Tesla V100 GPUs
  • Theoretical system peak performance of 720 teraflops (CPU only)
  • IME SSD for the /fs/scratch file system
• Owens Cluster: A 23,392-core Dell Intel Xeon E5-2680 v4 machine
  • 648 nodes have 28 cores per node and 128 GB of memory per node
  • 16 nodes have 48 cores and 1.5 TB of memory, for large Symmetric Multiprocessing (SMP) style jobs
  • 160 nodes have 28 cores, 128 GB of memory, and 1 NVIDIA Tesla P100 GPU
  • Theoretical system peak performance of 750 teraflops (CPU only)
  • IME SSD for the /fs/scratch file system
• Ruby Cluster: A 4,800-core HP Intel Xeon machine
  • 20 cores per node and 64 GB of memory per node
  • One node has 32 cores and 1 TB of memory, for large SMP style jobs
  • 20 nodes have NVIDIA Tesla K40 GPUs
• GPU Computing: All OSC systems now support GPU computing. Specific information is given on each cluster's page.
  • Ruby: 20 NVIDIA Tesla K40
  • Owens: 160 NVIDIA Tesla P100
  • Pitzer: 64 NVIDIA Tesla V100 (two each on 32 nodes)
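For context, a cluster's theoretical CPU peak can be estimated as cores × clock frequency × floating-point operations per cycle. The sketch below applies this arithmetic to the Pitzer core count listed above; the 2.2 GHz sustained AVX-512 frequency and 32 double-precision FLOPs per cycle are assumptions about the Xeon Gold 6148, not figures published by OSC.

```python
# Rough estimate of theoretical peak (CPU-only) performance:
#   peak FLOPS = cores x clock (Hz) x FLOPs per cycle per core
# Assumed values for the Xeon Gold 6148: ~2.2 GHz sustained AVX-512
# clock and 32 double-precision FLOPs/cycle (2 AVX-512 FMA units).

cores = 10_240          # Pitzer CPU cores (from the list above)
clock_hz = 2.2e9        # assumed sustained AVX-512 frequency
flops_per_cycle = 32    # assumed: 2 FMA units x 8 doubles x 2 ops

peak_teraflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"{peak_teraflops:.0f} TFLOPS")  # close to the quoted 720 teraflops
```

The same arithmetic with Owens's 23,392 cores and the narrower AVX2 units of the E5-2680 v4 lands near its quoted 750 teraflops.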
To serve your HPC needs, we currently offer:
• 30 virtual machines
• 16 vCPUs and 64 GB of RAM available per machine
• Convenient access at any time, from any device
• Access to any software from any VM
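In practice, work on the OSC clusters is submitted through a batch scheduler. The sketch below assumes the Slurm scheduler and a Pitzer standard node; the job name, project account code, module, and solver executable are placeholders for illustration, not actual SIMCenter values.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for an OSC cluster. The account,
# module, and solver names below are placeholders.
#SBATCH --job-name=cae_demo
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40     # 40 cores per standard Pitzer node
#SBATCH --time=01:00:00
#SBATCH --account=PAS0000        # placeholder project code

module load intel                # load an MPI-capable toolchain
srun ./my_solver input.dat       # placeholder solver executable
```

A script like this would be submitted with `sbatch`, letting long pre- and post-processing runs execute on the clusters while the researcher works from any device.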