**Hellbender** is the latest High Performance Computing (HPC) resource available to researchers and students (with sponsorship by a PI) within the UM-System.
  
**Hellbender** consists of 222 mixed x86-64 CPU nodes providing 22,272 cores as well as 28 GPU nodes consisting of a mix of Nvidia GPUs (see hardware section for more details). Hellbender is attached to our Research Data Ecosystem ('RDE'), which consists of 8 PB of high performance and general purpose research storage. RDE can be accessed from other devices outside of Hellbender to create a single research data location across different computational environments.
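
For a quick look at how those nodes appear from the scheduler's point of view, the commands below list partitions and per-node hardware. This is a minimal sketch assuming the Slurm client tools are available on the login node; the output fields, rather than any particular partition name, are the point here.

<code bash>
# List partitions with node counts, CPUs per node, memory, and generic resources (GPUs)
sinfo -o "%P %D %c %m %G"

# Show the detailed hardware record for a single node, e.g. c146 from the hardware tables
scontrol show node c146
</code>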
  
==== Investment Model ====
| Dell R740xa | 17     | 64         | 238 GB        | A100 | 80 GB      | 4     | 1.6 TB        | 1088   |
  
**Update 06/2025: Additional GPU priority partitions cannot be allocated at this time, as GPU investment has exceeded the 50% threshold. If you require capacity beyond the general pool, we can plan and work with your grant submissions to add additional capacity to Hellbender.**
  
**The 2025 pricing is: $7,692 per node per year.**
Dell C6420: 0.5-unit server containing dual 24-core Intel Xeon Gold 6252 CPUs with a base clock of 2.1 GHz. Each C6420 node contains 384 GB DDR4 system memory.
  
Dell R6625: 1-unit server containing dual 128-core AMD EPYC 9754 CPUs with a base clock of 2.25 GHz. Each R6625 node contains 1 TB DDR5 system memory.

Dell R6625: 1-unit server containing dual 128-core AMD EPYC 9754 CPUs with a base clock of 2.25 GHz. Each R6625 node contains 6 TB DDR5 system memory.
  
| **Model**  | **Nodes** | **Cores/Node** | **System Memory** | **CPU**                                  | **Local Scratch**   | **Cores** | **Node Names** |
| Dell C6420 | 64        | 48             | 364 GB            | Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz | 1 TB                | 3072      | c146-c209      |
| Dell R6625 | 12        | 256            | 994 GB            | AMD EPYC 9754 128-Core Processor         | 1.5 TB              | 3072      | c210-c221      |
| Dell R6625 | 2         | 256            | 6034 GB           | AMD EPYC 9754 128-Core Processor         | 1.6 TB              | 512       | c222-c223      |
|            |           |                |                   |                                          | Total Cores         | 22272     |                |
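
The 1 TB and 6 TB R6625 nodes matter mainly for jobs that request memory explicitly. Below is a minimal sketch of a Slurm batch script for a large-memory job; the partition name ''general'' and the resource numbers are illustrative assumptions, not Hellbender-specific settings.

<code bash>
#!/bin/bash
#SBATCH --job-name=bigmem-test
#SBATCH --partition=general   # assumed partition name; check sinfo for the real ones
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32
#SBATCH --mem=1500G           # a request this size will not fit the standard CPU nodes, only the 6 TB R6625 nodes
#SBATCH --time=02:00:00

# Report where the job landed and how much memory the node actually has
hostname
free -h
</code>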
  
=== GPU nodes ===
  
| **Model**   | **Nodes** | **Cores/Node** | **System Memory** | **GPU**  | **GPU Memory** | **GPUs** | **Local Scratch** | **Cores** | **Node Names** |
| Dell R750xa | 17        | 64             | 490 GB            | A100     | 80 GB          | 4        | 1.6 TB            | 1088      | g001-g017      |
| Dell XE8640 | 2         | 104            | 2002 GB           | H100     | 80 GB          | 4        | 3.2 TB            | 208       | g018-g019      |
| Dell XE9640 | 1         | 112            | 2002 GB           | H100     | 80 GB          | 8        | 3.2 TB            | 112       | g020           |
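
To target a specific GPU model from the table above, jobs typically request it through Slurm's GRES syntax. The sketch below assumes a partition named ''gpu'' and GRES type names matching the GPU models (''A100'', ''H100''); confirm the real partition and GRES names with ''sinfo'' before relying on them.

<code bash>
# Interactive shell with one A100 attached (partition and GRES type names are assumptions)
srun --partition=gpu --gres=gpu:A100:1 --mem=32G --time=01:00:00 --pty bash

# Inside the job, confirm which GPU was actually allocated
nvidia-smi
</code>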
Below is the process for setting up a class on the OOD portal.
  
  - Send the class name, the list of students and TAs, and any shared storage requirements to itrss-support@umsystem.edu. This can also be accomplished by filling out our course request form: **[[https://missouri.qualtrics.com/jfe/form/SV_6FpWJ3fYAoKg5EO|Hellbender: Course Request Form]]**
  - We will add the students to the group allowing them access to OOD.
  - If the student does not have a Hellbender account yet, they will be presented with a link to a form to fill out requesting a Hellbender account.
  
**Documentation**: http://docs.nvidia.com/cuda/index.html
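
As a quick sanity check that the CUDA toolchain is usable, the sketch below loads a CUDA module, compiles a trivial kernel with ''nvcc'', and runs it on a GPU node. The module name ''cuda'' is an assumption; use ''module avail'' to see what is actually installed.

<code bash>
module avail cuda   # module name below is an assumption -- check what exists first
module load cuda

# Compile and run a minimal CUDA program
cat > hello.cu <<'EOF'
#include <cstdio>
__global__ void hello() { printf("hello from the GPU\n"); }
int main() { hello<<<1, 1>>>(); cudaDeviceSynchronize(); return 0; }
EOF
nvcc hello.cu -o hello
./hello
</code>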

==== RStudio ====

[[https://youtu.be/WuAwXMUYE_Y]]
  
==== Visual Studio Code ====