This page explains the partitions available to users and the accounting for each partition. The configuration of Snellius also allows users to use a node in a "shared" mode, in which a job uses only a subset of the resources of a full node; this has implications for accounting.

This page assumes knowledge of partition usage and of how to submit a job using SLURM. Please refer to the HPC user guide for a general introduction to these topics.

Snellius partitions

Compute nodes are grouped into partitions so that users can select the hardware on which their software runs. Each partition contains a subset of nodes with a specific type of hardware and a specific maximum wall time.

The partitions can be selected by users via the SLURM option:

#SBATCH --partition=<partition name>
or its short form:
#SBATCH -p <partition name>
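
For example, a minimal job script that selects the rome partition could start as follows (the core count, walltime and program name are only illustrative):

#!/bin/bash
#SBATCH --partition=rome
#SBATCH --ntasks=16
#SBATCH --time=01:00:00

srun ./my_program    # my_program is a placeholder for your own application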

The partitions available on Snellius are summarised in the table below. For details of the different hardware available on each node, please look at the Snellius hardware page.

The "Available memory per node" is the amount of memory available to users and what can be requested within a job. This value is smaller than the "Total memory" on the node as it doesn't include the memory reserved for the OS and other system's processes.  

Partition name | Node type | # cores per node | Available memory per node | Smallest possible allocation | Max wall time | Notes
rome / thin | tcn (thin compute node, AMD Rome CPU) | 128 | 224 GiB | 1/8 node: 16 cores + 28 GiB memory | 120 h (5 days) | The "thin" and "rome" partitions are currently aliases for the same set of nodes; this might change in the near future.
genoa | tcn (thin compute node, AMD Genoa CPU) | 192 | 336 GiB | 1/12 node: 16 cores + 28 GiB memory | 120 h (5 days) |
fat_rome | fcn (fat compute node, Rome) | 128 | 960 GiB | 1/8 node: 16 cores + 120 GiB memory | 120 h (5 days) | Access only through NWO Large Compute applications or Small Compute applications. Please contact the service desk for more information.
fat_genoa | fcn (fat compute node, Genoa) | 192 | 1440 GiB | 1/12 node: 16 cores + 120 GiB memory | 120 h (5 days) | Access only through NWO Large Compute applications or Small Compute applications. Please contact the service desk for more information.
himem_4tb | hcn, PH.hcn4T (high memory node, 4 TiB) | 128 | 3840 GiB | 1/8 node: 16 cores + 480 GiB memory | 120 h (5 days) | Access only through NWO Large Compute applications or Small Compute applications. Please contact the service desk for more information.
himem_8tb | hcn, PH.hcn8T (high memory node, 8 TiB) | 128 | 7680 GiB | 1/8 node: 16 cores + 960 GiB memory | 120 h (5 days) | Access only through NWO Large Compute applications or Small Compute applications. Please contact the service desk for more information.
gpu | gcn | 72 | 480 GiB | 1/4 node: 18 cores + 1 GPU + 120 GiB memory | 120 h (5 days) |
gpu_mig | gcn (MIG) | 72 | 480 GiB | 1/8 node: 9 cores + 1 GPU (MIG) + 60 GiB memory | 120 h (5 days) | Multi-Instance GPU (MIG) is NVIDIA technology that allows partitioning a single GPU into multiple instances, each fully isolated with its own high-bandwidth memory, cache, and compute cores. On Snellius, each GPU is partitioned into 2 independent MIG instances, for a total of 8 GPU (MIG) instances per node.
gpu_vis | gcn | 72 | 480 GiB | 1/4 node: 18 cores + 1 GPU + 120 GiB memory | 24 h (1 day) | The nodes in this partition are meant for (interactive) data visualization usage only, not for GPU compute. Access is restricted by default; please contact the service desk to request access to this partition for visualization purposes.
staging | srv | 16 (32 threads with SMT) | 224 GiB | 1 thread + 7 GiB memory | 120 h (5 days) | SMT is activated on the srv nodes, enabling up to 32 threads per node (2 threads/core).

Short jobs (at most 1 hour walltime)

Whenever you submit a job that requests at most 1 hour of walltime to the "thin", "fat" or "gpu" partitions, SLURM will schedule the job on a node that is only available for such short jobs. This effectively reduces the wait time for short jobs compared to longer jobs, which is useful for testing the setup and correctness of your jobs before submitting long-running production runs.
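
For example, you can cap the walltime at submission time, so that an existing job script qualifies for these nodes (the script name is hypothetical; command-line options override the #SBATCH directives in the script):

$ sbatch --time=01:00:00 my_job_script.sh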

Note that the number of nodes that can run short jobs is relatively small, so submitting a short job that requests many (e.g. tens or hundreds of) nodes will not work.

Accounting

Resource usage is measured in SBUs (System Billing Units). An SBU can be thought of as a "weighted" or "normalised" core hour. Because nodes differ in the type of CPUs, the amount of memory, and attached resources such as a GPU or a local NVMe disk, SBUs are assigned and weighted per node type. On Snellius, charging for resource usage is based on how long a resource was used (wall-clock time) in addition to the type and number of nodes (or partial nodes) used. A more detailed tutorial on how to estimate SBU usage is available here.
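
As a worked example using the weights from the tables below: a job that runs for 10 hours on half of a fat Rome node (64 cores, at 1.5 SBUs per core-hour) consumes 1.5 x 64 cores x 10 hours = 960 SBUs.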

The tables below show the "SBU pricing" of core and GPU hours for the various node types.

Accounting (CPU nodes)


Node type | Description | Weight (SBUs per core-hour) | # CPU cores per node | Smallest possible allocation | SBUs per 1 hour, full node | SBUs per 1 hour, smallest possible allocation
tcn rome | Thin compute node, AMD Rome CPU | 1.0 | 128 | 1/8 node: 16 cores | 128 SBUs | 16 SBUs
tcn genoa | Thin compute node, AMD Genoa CPU | 1.0 | 192 | 1/12 node: 16 cores | 192 SBUs | 16 SBUs
fcn rome | Fat compute node, AMD Rome CPU | 1.5 | 128 | 1/8 node: 16 cores | 192 SBUs | 24 SBUs
fcn genoa | Fat compute node, AMD Genoa CPU | 1.5 | 192 | 1/12 node: 16 cores | 288 SBUs | 24 SBUs
hcn, PH1.hcn4T | High memory node, 4 TiB | 2.0 | 128 | 1/8 node: 16 cores | 256 SBUs | 32 SBUs
hcn, PH1.hcn8T | High memory node, 8 TiB | 3.0 | 128 | 1/8 node: 16 cores | 384 SBUs | 48 SBUs
srv | Service node, for data transfer jobs | 2.0 | 16 (32 threads with SMT) | 1 thread | 32 SBUs | 1 SBU

Accounting (GPU nodes)

Accounting for GPU node types is on a "per GPU" basis.

For example:

  • If I use 1 GPU for 1 hour, I will be charged: 1 GPU x 128 Accounting Weight Factor (per GPU) x 1 hour = 128 SBUs
  • If I use 1 GPU for a full day, I will be charged: 1 GPU x 128 Accounting Weight Factor (per GPU) x 24 hours = 3072 SBUs
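
As a minimal sketch, a job script requesting a quarter of a GPU node (1 GPU and 18 cores; the program name is a placeholder) could look like this:

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gpus=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=18
#SBATCH --time=02:00:00

srun ./my_gpu_program    # placeholder for your own GPU application

Run for the full 2 hours, this allocation would consume 1 GPU x 128 x 2 hours = 256 SBUs.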


Node type | Description | Accounting Weight Factor (per GPU) | # GPUs per node | # CPU cores per node | Smallest possible allocation | SBUs per 1 hour, full node | SBUs per 1 hour, smallest possible allocation
gcn | GPU-enhanced compute node with 4 GPUs | 128 | 4 | 72 | 1/4 node: 18 cores + 1 GPU | 512 SBUs | 128 SBUs
gcn (MIG) | Compute node with 8 MIG GPU instances | 64 | 8 (each A100 is split in two) | 72 | 1/8 node: 9 cores + 1 GPU (MIG) | 512 SBUs | 64 SBUs


Shared usage accounting 

It is possible to submit a single-node job on Snellius that uses only part of the resources of a full node. “Resources” here means either cores or memory of a node. The rules for shared resource accounting are described below. Example shared usage job scripts can be found here.

For single-node jobs (only), users can request part of a node's resources. Jobs that require multiple nodes will always be allocated (and charged for) full nodes, i.e. there are no multi-node jobs that share nodes with other jobs.

The requested resources, i.e. CPU and memory, will be enforced by cgroups limits. This means that when you request, say, 1 CPU core and 1 GB of memory, those will be the hardware resources your job gets access to, and only those (even if a node has more hardware resources).

However, the accounting of shared jobs using less than a full node is done in increments of 1/8th of a node (1/4th of a node for the GPU nodes). Any combination of memory and/or cores (or GPUs) will be rounded up to the next eighth of a node (quarter for the GPU nodes), up to a full node. An eighth of a node's resources is defined as an eighth of the node's total cores or total memory. The resource (memory or cores) requested at the highest fraction determines the resource allocation of the job. So requesting a quarter of the memory and half the CPU cores will lead to half the node being accounted.

For nodes with attached GPUs, a quarter of a node implies: 1 GPU + a quarter of the node's CPU cores and memory.
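
As an illustration of the rounding rule (the requested numbers are only an example), consider a single-node job on a rome node requesting 32 cores and 100 GiB of memory:

#!/bin/bash
#SBATCH --partition=rome
#SBATCH --ntasks=32
#SBATCH --mem=100G
#SBATCH --time=04:00:00

srun ./my_program    # placeholder for your own application

The 32 cores amount to 2/8 of the node, but 100 GiB is more than 3/8 of the 224 GiB of available memory, so the memory request dominates and the job is accounted as half a node: 64 core-equivalents, i.e. 64 SBUs per hour on a thin Rome node.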

Here is a list of example shared usage allocations:

Shared CPU accounting examples
  • 1/8 node reservation
    • Single-node jobs requesting up to and including 16 cores for a rome or high memory node
    • Single-node jobs requesting up to and including 28 GiB memory on a thin node or 120 GiB on a fat node
  • 1/2 node reservation
    • Single-node jobs requesting up to and including 64 cores for a rome or high memory node
    • Single-node jobs requesting up to and including 120 GiB memory on a thin node or 480 GiB on a fat node
  • 3/4 node reservation
    • Single-node jobs requesting up to and including 96 cores for a rome or high memory node
    • Single-node jobs requesting up to and including 180 GiB memory on a thin node or 720 GiB on a fat node
  • Full node reservation
    • Jobs requesting all the cores in the node 
    • Jobs requesting all the memory of a node
Shared GPU accounting examples
  • 1/4 node reservation
    • Single-node jobs requesting up to and including 18 cores (or 1/4 of the node memory, or 1 GPU) for a GPU node
  • 1/2 node reservation
    • Single-node jobs requesting up to and including 36 cores (or 1/2 of the node memory, or 2 GPUs) for a GPU node
  • 3/4 node reservation
    • Single-node jobs requesting up to and including 54 cores (or 3/4 of the node memory, or 3 GPUs) for a GPU node
  • Full node reservation
    • Jobs requesting all the cores, memory or GPUs of a node, and any multi-node job, independent of the number of cores (memory, GPUs) requested

You will be charged for this share of the node, independently of the number of cores actually used.

Jobs requesting more than 1 node will get exclusive access to the allocated nodes (only one job can run on them at the same time), independent of the number of cores or amount of memory requested. The batch system will accept jobs that request 1 node, 2 nodes, 3 nodes, and so on, providing exclusive use of all the cores, GPUs and memory on the node(s). It is important to note that Snellius is a machine designed for large compute jobs. We encourage users to develop workflows that schedule jobs running on at least a full node of a particular type.

Service nodes

The "odd one out" node type is the service node (srv node). Srv nodes are dedicated for the automation of data transfer tasks. The transferring of data in or out of the system, is a task that does not involve much "compute" at all. Usually it is more limited by network bandwidth than by CPU resources. Therefore, jobs submitted to srv nodes by default are jobs using just a single thread out of the 32 available per node (on srv nodes we enabled SMT).

Core hours versus job time limit

The use of the unit "core hour" above does not imply anything about the minimum or maximum duration of a job. The job scheduling and accounting systems have a time resolution of 1 second. Accounts are charged only for the time the resources were actually used, independently of the requested walltime.

How resource usage is subtracted from the SBU budget differs between regular jobs and jobs run within a reservation:

  • For regular jobs (i.e. not part of a reservation), the wall-clock time that is accounted runs from the actual start time of the allocation of the resources to the actual end time and de-allocation of the resources. If such a job ends before its requested time limit (as specified with -t <duration> to sbatch) is over, then only SBUs for the actual run time in wall-clock time are consumed. Jobs that are submitted and subsequently cancelled before they were ever provided with an allocation of nodes do not consume any SBU budget.
  • A reservation will always be accounted for the full duration and set of resources reserved. This is the case even when all or part of the reserved resources are left idle, e.g. because smaller jobs than would have been possible were run within the reservation.

Our HPC User Guide contains guidelines and several examples of how to request resources on our HPC systems. Check the Creating and running jobs section or the Example job scripts for more details.


Costs of inefficient use

You will be charged for all cores in the node(s) that you reserved, regardless of the actual number of cores used by the job/application. So if your application uses only a few (or even one) of the CPU cores of a node then it makes sense to write a job script that runs multiple instances of this application in parallel, in order to fully utilize the reserved resources and your budget.
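
As a sketch of this pattern (the application name and input files are placeholders), a job script could start one instance per core and wait for all of them to finish:

#!/bin/bash
#SBATCH --partition=rome
#SBATCH --nodes=1
#SBATCH --ntasks=128
#SBATCH --time=01:00:00

# Start 128 independent instances of a serial application, one per core,
# each with its own input file, then wait for all of them to complete.
for i in $(seq 1 128); do
    ./my_app input_${i}.dat > output_${i}.log &
done
wait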

Getting account and budget information

You can view your account details using

$ accinfo

This shows information such as the e-mail associated with the account, the initial and remaining budget, and until when the account is valid.

An overview of the SBU consumption for the current account can be obtained with

$ accuse

By default, consumption is shown for the current login, per month, over the last year. Per-day usage can be obtained by adding the -d flag. The start and end of the period shown in the overview can be changed with the -s DD-MM-YYYY and -e DD-MM-YYYY flags, respectively. Finally, consumption for a specific account or login can be obtained using -a accountname and -u username, respectively.
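
For example (the dates are illustrative), to show per-day consumption for January 2024:

$ accuse -d -s 01-01-2024 -e 31-01-2024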

In case you want to know the CPU/GPU budgets separately, you can try:

accinfo --product=cpu
accinfo --product=gpu

Note that accinfo and accuse report the state of your account's budget as it is registered on the SURF central accounting server. The data on the central accounting server is updated asynchronously, typically only once every 24 hours, so the output of accuse and accinfo does not take into account recently finished or still-running jobs. Use the budget-overview tool, described below, for this.

The budget-overview tool

Another option, with a slightly different focus, is the budget-overview tool. This tool checks and/or reports the usage of your budget for batch jobs on Snellius. It reports how much budget you have left more accurately than the other SURF accounting tools, such as accinfo and accuse described above; those tools report the state of your account's budget as it is registered on the SURF central accounting server, which is updated asynchronously, typically only once every 24 hours.

The budget-overview tool interacts with the accounting server to get the last known centrally registered budget state, and in addition interacts with the Slurm batch system. From the latter it can take into account recently finished jobs and jobs that are still active, and thus check and report more accurately how much budget is left and how fast it diminishes during the day.

The budget-overview tool can also inform you about the cost of recently finished, active and queued jobs. It is complementary to the other SURF accounting tools. Since budget-overview only reports on jobs that have not yet been registered at the central accounting server, it has a horizon of at most 24-48 hours (usually less). It is not suitable for producing overviews of, say, last month's batch usage; you need accuse for that sort of longer-term overview.

$ budget-overview