pub:hpc:hellbender, revision 2025/03/24 18:08 (current) by redmonp; previous revision 2025/01/31 18:50 by nal8cf
**What is the Difference between High Performance and General Performance Storage?**
On Pixstor, which is used for standard HPC allocations,
On VAST, which is used for non-HPC and mixed HPC/SMB workloads, the disks are all flash, but general storage allocations have a QoS policy attached that limits IOPS so that a single share cannot saturate the disk pool to the point where high-performance allocations are impacted.
  * Workloads that require sustained use of low-latency read and write IO at multiple GB/s, generally generated by jobs utilizing multiple NFS mounts
+ | |||
+ | **Snapshots** | ||
+ | |||
+ | *VAST default policy retains 7 daily and 4 weekly snapshots for each share | ||
+ | *Pixstor default policy is 10 daily snapshots | ||
**__None of the cluster attached storage available to users is backed up in any way by us__**; this means that if you delete something and don't have a copy somewhere else, it is gone. Please note the data stored on cluster attached storage is limited to Data Class 1 and 2 as defined by [[https://
  * **[[https://
  * **[[https://
  * **[[https://
  * **[[https://
| **Model** |
| Dell C6525 | 112 | 128 | 490 GB | AMD EPYC 7713 64-Core |
| Dell R640 | 32 | 40 |
| Dell C6420 | 64 | 48 |
| Dell R6620 | 12 | 256 | 994 GB |
| Dell XE8640 | 2 | 104 | 2002 GB | H100 | 80 GB | 4 | 3.2 TB | 208 | g018-g019 |
| Dell XE9640 | 1 | 112 | 2002 GB | H100 | 80 GB | 8 | 3.2 TB | 112 | g020 |
| Dell R730 | 4 | 20 |
| Dell R7525 | 1 |
| Dell R740xd | 2 | 40 | 364 GB | V100 | 32 GB | 3 | 240 GB | 80 |
| Dell R740xd | 1 | 44 | 364 GB | V100 | 32 GB | 3 | 240 GB | 44 | g028 |
A specially formatted sinfo command can be run on Hellbender to report live information about the nodes and the hardware/
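Slurm's ''sinfo'' builds such reports from a ''-o''/''--format'' string of field specifiers, e.g. ''%N'' (node list), ''%c'' (CPUs per node), ''%m'' (memory in MB), ''%G'' (generic resources such as GPUs), and ''%T'' (node state). The exact format string used on Hellbender is not reproduced here, so the one below is only an illustrative guess; the snippet prints the assembled command so it can be copied onto a login node.

```shell
# Illustrative only: the field widths and ordering are assumptions, not
# Hellbender's actual command. %N=nodes, %c=CPUs, %m=memory (MB),
# %G=GRES (e.g. GPUs), %T=state.
FMT="%.14N %.6c %.10m %.30G %.12T"
echo sinfo -N -o "\"$FMT\""
# On a Hellbender login node you would run:  sinfo -N -o "$FMT"
```

The ''-N'' flag lists one line per node rather than grouping nodes by partition, which is what makes per-node hardware reporting possible.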
==== Open OnDemand ====
  * https://
  * https://
OnDemand provides an integrated, single access point for all of your HPC resources. The following apps are currently available on Hellbender'