A simple serial batch job launches each program with the `srun` command. An example batch job script:
```bash
#!/bin/bash
#SBATCH --account=myprojectname
#SBATCH --partition=test
#SBATCH --ntasks=1
#SBATCH --time=00:02:00

srun hostname
srun sleep 60
```
This script requests one task (`--ntasks=1`) for two minutes (`--time=00:02:00`) from the test queue (`--partition=test`). It first runs `hostname`, which will print the name of the Puhti computing node that has been allocated for this particular job, and then the `sleep` program to keep the job running for an additional 60 seconds, in order to have time to monitor the job.
Save the script as `my_serial.bash` and change `myprojectname` to the project you actually want to use. Submit the job with:

```bash
sbatch my_serial.bash
```

You can follow the state of the job with:

```bash
squeue -u $USER
```
The output of the job is written to a file named `slurm-XXXXXXX.out`, where XXXXXXX is a unique number corresponding to the job ID of the job. Once the job has finished, you can check its resource usage with:

```bash
seff XXXXXXX
```

(replace XXXXXXX with the actual job ID number from the `slurm-XXXXXXX.out` file). If you need to cancel a job, look up its job ID with `squeue -u $USER` and remove it with:

```bash
scancel XXXXXXX
```
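If you want to script these steps, the standard Slurm option `--parsable` makes `sbatch` print only the job ID, which can then be passed to the other commands. A minimal sketch (the variable name `jobid` is just for illustration):

```bash
# Submit the job; --parsable makes sbatch print only the job ID
jobid=$(sbatch --parsable my_serial.bash)

# Follow this particular job in the queue
squeue -j "$jobid"

# After the job has finished, check resource usage and the output file
seff "$jobid"
cat "slurm-${jobid}.out"
```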
To test a parallel OpenMP job, download a small pre-compiled test program and make it executable:

```bash
wget https://a3s.fi/hello_omp.x/hello_omp.x
chmod +x hello_omp.x
```
The following batch script runs the program using four OpenMP threads:

```bash
#!/bin/bash
#SBATCH --account=myprojectname
#SBATCH --partition=test
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:00:10

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun hello_omp.x
```
This script requests one task (`--ntasks=1`) with four cores (`--cpus-per-task=4`) for ten seconds (`--time=00:00:10`) from the test queue (`--partition=test`), so the program `hello_omp.x` will be able to utilise four cores. Setting `OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK` tells the program that it can use four cores; each of the four threads launched by `hello_omp.x` will print its own output.
Save the script as `my_parallel_omp.bash` and change `myprojectname` to the project you actually want to use. Submit the job with:

```bash
sbatch my_parallel_omp.bash
```
The output file `slurm-XXXXXXX.out` should contain the results printed by the four OpenMP threads. Check it with the `cat` command:

```bash
cat slurm-5118404.out
```

```
Hello from thread: 0
Hello from thread: 3
Hello from thread: 2
Hello from thread: 1
```
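Note that Slurm sets `SLURM_CPUS_PER_TASK` only when `--cpus-per-task` appears in the resource request; if that option were dropped, the export above would leave `OMP_NUM_THREADS` empty and the OpenMP runtime would pick its own default, which may not match the reserved cores. A small defensive variant of the script body (the `:-1` fallback is an addition, not part of the original example):

```bash
# Use the allocated core count if Slurm provides it; otherwise run one thread
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
srun hello_omp.x
```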
To test an MPI job, download another pre-compiled test program and make it executable:

```bash
wget https://a3s.fi/hello_mpi.x/hello_mpi.x
chmod +x hello_mpi.x
```
The following batch script runs the program as eight MPI tasks spread over two nodes:

```bash
#!/bin/bash
#SBATCH --account=myprojectname
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:00:10

srun hello_mpi.x
```
This script requests two nodes (`--nodes=2`) and four cores from each node (`--ntasks-per-node=4`) for ten seconds (`--time=00:00:10`) from the test queue (`--partition=test`). Based on this resource request, `hello_mpi.x` will start 8 simultaneous tasks, and each task will report on which node it got its resources.
Save the script as `my_parallel.bash` and change `myprojectname` to the project you actually want to use. Submit the job with:

```bash
sbatch my_parallel.bash
```
The output file `slurm-XXXXXXX.out` should contain the results reported by the `hello_mpi.x` program on how the 8 tasks were distributed over the two reserved nodes. Check it with the `cat` command:

```bash
cat slurm-5099873.out
```

```
Hello world from node r07c01.bullx, rank 0 out of 8 tasks
Hello world from node r07c02.bullx, rank 5 out of 8 tasks
Hello world from node r07c02.bullx, rank 7 out of 8 tasks
Hello world from node r07c01.bullx, rank 2 out of 8 tasks
Hello world from node r07c02.bullx, rank 4 out of 8 tasks
Hello world from node r07c01.bullx, rank 3 out of 8 tasks
Hello world from node r07c01.bullx, rank 1 out of 8 tasks
Hello world from node r07c02.bullx, rank 6 out of 8 tasks
```
In this case the tasks were executed on two nodes (`r07c01.bullx` and `r07c02.bullx`), four tasks on each. As before, you can check the resource usage of the finished job with `seff XXXXXXX` (replace XXXXXXX with the actual job ID number from the `slurm-XXXXXXX.out` file), and if needed, look up the job ID with `squeue -u $USER` and cancel the job with `scancel XXXXXXX`.
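If it does not matter how the tasks are placed on the nodes, the same eight-task job can also be requested with a single `--ntasks` option, in which case Slurm decides the node placement itself. A sketch, assuming the test partition permits this (this variant is not part of the original example):

```bash
#!/bin/bash
#SBATCH --account=myprojectname
#SBATCH --partition=test
#SBATCH --ntasks=8          # let Slurm choose how to place the 8 tasks
#SBATCH --time=00:00:10

srun hello_mpi.x
```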
In an interactive batch job, an interactive shell session is launched on a computing node. For heavy interactive tasks you can request specific resources (time, memory, cores, disk). You can also use tools with graphical user interfaces in an interactive shell session, although for such usage the NoMachine remote desktop often provides a smoother experience.
Start a ten-minute interactive session with:

```bash
sinteractive --account myprojectname --time 00:10:00
```
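The `sinteractive` wrapper accepts further resource options, including the amount of memory and fast local disk. A sketch of a larger request; the option names below are assumptions based on CSC's wrapper and should be confirmed with `sinteractive --help`:

```bash
# Request 2 cores, 4000 MB of memory and 10 GB of local scratch for one hour
# (option names per CSC's sinteractive wrapper; verify with sinteractive --help)
sinteractive --account myprojectname --time 01:00:00 --cores 2 --mem 4000 --tmp 10
```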
The `sinfo` command gives an overview of the partitions (queues) offered by the computer. The `squeue` command shows the list of jobs that are currently running (noted as 'R', for RUNNING) or waiting for resources (noted as 'PD', short for PENDING); `squeue -u $USER` lists only your own jobs.
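For example, a quick way to keep an eye on the queue while a job is waiting (the 30-second refresh interval is just a suggestion):

```bash
sinfo                        # overview of the partitions and their state
squeue -u $USER              # all of your own jobs
watch -n 30 squeue -u $USER  # refresh the listing every 30 seconds (Ctrl-C to quit)
```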