Python slurm
11/24/2023

Below is a SLURM batch script, `run_python.slurm`, that runs a Python script passed to it as a command-line argument:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=1:30:00
#SBATCH --output=python_job-%j.log

# load python interpreter
module load python/3.7.3

# run the python script, given as command line argument
python $1
```

- `#SBATCH --nodes=1` asks for one node on the cluster.
- `#SBATCH --time=1:30:00` asks to reserve the node for 1.5 hours.
- `#SBATCH --output=python_job-%j.log` asks to print any outputs of the job to a log file with the specified name format. The `%j` part of the file name will be replaced by the job number assigned by the scheduler.

Add necessary modules to your environment. Here we just add the module for Python, version 3.7.3.

Run the following commands to submit a couple of jobs. Here `$1` corresponds to the first argument that will be fed into `run_python.slurm` when we submit our job. Recall that you can see the results of the jobs in two files of the form `python_job-%j.log`.

You can also start an interactive session on a compute node. It will appear that nothing has changed, but your session will actually be running on one of the compute nodes. Try typing `hostname` to see that you're actually logged into one of the compute nodes. When you're done, you can type `exit` to return to the head node.

There are some small differences in the options needed, depending on the parallel computing paradigm used. The two examples below show the options that should be set for running a (shared-memory) multithreaded job and for running a (non-shared-memory) multiprocess job. When running a job that consists of a single process (referred to as a task in the language of SLURM) that can spawn multiple threads, the SLURM `cpus-per-task` option should be set in order to request the use of multiple cores on the compute node. The options will be, respectively, `cpus-per-task` and `ntasks-per-node`.

The example code below shows a script that generates some matrices and performs matrix multiplication.
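The matrix-multiplication script mentioned above was originally shown as an image. A minimal sketch of such a script, assuming NumPy is available on the cluster (the file name `matmul.py` and the matrix sizes are illustrative, not from the original), might look like this:

```python
# matmul.py -- minimal sketch of a script that generates some matrices
# and performs matrix multiplication (details are assumptions; the
# original code was lost with the image).
import numpy as np

def main():
    rng = np.random.default_rng(seed=0)
    a = rng.random((1000, 1000))   # generate two random 1000x1000 matrices
    b = rng.random((1000, 1000))
    c = a @ b                      # matrix multiplication
    print("result shape:", c.shape)

if __name__ == "__main__":
    main()
```

Any script of this shape can be handed to the batch script as the `$1` argument.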
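The two parallel variants described above differ only in their `#SBATCH` headers. A sketch of each (the core and task counts are illustrative; pick values appropriate for your cluster):

```shell
# Multithreaded (shared-memory) job: a single task that spawns threads,
# so request several cores for that one task.
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
```

```shell
# Multiprocess (non-shared-memory) job: several independent single-core tasks.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
```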
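The submission commands referred to above were also shown as an image; they would look something like this (the script name `matmul.py` is illustrative):

```shell
# Submit a couple of jobs; the Python script name becomes $1
# inside run_python.slurm.
sbatch run_python.slurm matmul.py
sbatch run_python.slurm matmul.py

# check the status of your jobs in the queue
squeue -u $USER
```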
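The interactive-session step described above (lost with the original image) is typically done with `srun`; a sketch, with illustrative resource options:

```shell
# Request an interactive shell on a compute node
# (options are illustrative; check your cluster's documentation).
srun --nodes=1 --time=0:30:00 --pty bash

hostname   # now prints the name of a compute node
exit       # return to the head node
```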