Submitit
Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.
Submitit is a lightweight tool designed for submitting Python functions for computation within a Slurm cluster. It acts as a wrapper for submission and provides access to results, logs and more. Slurm, on the other hand, is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system suitable for both large and small Linux clusters. Submitit allows for seamless execution switching between Slurm and local environments. The source code, issues and pull requests can be found here.
To facilitate the switch between OAR and Slurm-based resource managers for users of Inria's national computing infrastructure, ongoing development efforts are focused on adding support for the OAR cluster in Submitit, through a plugin approach. The source code for the OAR plugin can be accessed here. A summary of the progress made, including implemented features and pending tasks can be found in this section.
Submitit installation
Using pip
pip can be used to install the stable release of submitit:
pip install --user submitit
To use the latest version, including the OAR plugin, an installation from source can be done:
frontend.site :
pip install --user git+https://gitlab.inria.fr/moyens-de-calcul/submitit.git@master#egg=submitit
It is recommended to install Python dependencies via a virtual environment. To do so, before running the pip command:
python3 -m venv env
source env/bin/activate
Using conda
conda can be used to install submitit from the conda-forge channel:
conda install -c conda-forge submitit
To use the latest version, including the OAR plugin, you can create a conda environment file (e.g. "conda-env-submitit.yml") such as:
name: submitit
dependencies:
  - pip
  - pip:
    - git+https://gitlab.inria.fr/moyens-de-calcul/submitit.git@master#egg=submitit
and then install the latest version of Submitit using this environment file:
conda env create -f conda-env-submitit.yml
Basic usage
The example below shows the execution of an integer addition performed with the Submitit library. The AutoExecutor serves as the interface for submitting functions that are executed either on a cluster or locally. It automatically detects the presence of a scheduler (such as OAR or Slurm). If no scheduler is detected, the function is executed locally on the machine.
import submitit
def add(a, b):
return a + b
# logs are dumped in the folder
executor = submitit.AutoExecutor(folder="log_test")
job_addition = executor.submit(add, 5, 7) # will compute add(5, 7)
output = job_addition.result() # waits for completion and returns output
# if ever the job failed, result() will raise an error with the corresponding trace
print('job_addition output: ', output)
assert output == 12
The example script needs to be launched on the frontend as follows:
The execution of the submitit script will create the working folder (i.e., folder="log_test"), in which you will find the scheduler output files (jobId_log.out and jobId_log.err), the scheduler submission file (jobId_submission.sh), and the submitit pickles (a task file jobId_submitted.pkl and the result jobId_result.pkl).
Note: It is generally recommended to specify an explicit folder to avoid any conflicts with other files present in the current directory. Note that this folder may grow rapidly, especially if you have large checkpoints (see the checkpoint section for details) or if you submit a lot of jobs. You should think about cleaning up the folder, ideally in an automated way.
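One possible automated cleanup is sketched below: it removes files older than a given age from the log folder. The `clean_log_folder` helper and the 7-day threshold are illustrative assumptions, not part of submitit.

```python
import os
import time

def clean_log_folder(folder, max_age_days=7):
    """Remove files older than max_age_days from a submitit log folder.
    Hypothetical helper -- not part of submitit itself."""
    cutoff = time.time() - max_age_days * 86400
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        # only delete regular files whose last modification is older than the cutoff
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
```

Such a function could be run periodically (e.g., from a cron job on the frontend) against the folder passed to AutoExecutor.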
Advanced usage
Using parameters for specifying resources
For specifying resources (e.g., cores, memory, partition, ...), you need to call the update_parameters method:
executor = submitit.AutoExecutor(folder="log_test")
executor.update_parameters(slurm_partition="cpu_devel", oar_queue="default")
The following table recaps the parameters supported (and unsupported) by AutoExecutor, OARExecutor and SlurmExecutor:
The scheduler-specific parameters can also be used with scheduler-specific executors, without the scheduler name prefix. For instance, with OarExecutor, you do not need to add oar_ before the parameters walltime or queue:
executor = submitit.OarExecutor(folder="log_test")
executor.update_parameters(walltime="0:0:5", queue="default")
Example with OarExecutor
In the example below, we want a job with a specific walltime that requires 1 GPU on the production queue and is executed in besteffort mode.
import submitit
def add(a, b):
return a + b
executor = submitit.AutoExecutor(folder="log_test")
executor.update_parameters(oar_queue="production", oar_walltime="0:2:0", nodes=1, gpus_per_node=1, oar_additional_parameters={"t": "besteffort"})
job_addition = executor.submit(add, 5, 7)
output = job_addition.result()
print('job_addition output: ', output)
assert output == 12
If we check the submission file submitted to OAR, we can see all our requirements:
#!/bin/bash
# Parameters
#OAR -E /home/ychi/submitit_env/log_test/%jobid%_0_log.err
#OAR -O /home/ychi/submitit_env/log_test/%jobid%_0_log.out
#OAR -l /nodes=1/gpu=1,walltime=0:2:0
#OAR -n submitit
#OAR -q production
#OAR -t besteffort
# command
export SUBMITIT_EXECUTOR=oar
/home/ychi/submitit_env/env/bin/python -u -m submitit.core._submit /home/ychi/submitit_env/log_test
Checkpointing with Submitit
The purpose of the checkpointing functionality is to allow the automatic resumption of a computation when the job is preempted by another. To do so, the current state of the job needs to be saved before the scheduler (e.g., OAR, Slurm) stops the job.
As explained in more detail in the official submitit documentation, you need to:
- define a class that implements the submitit.helpers.Checkpointable interface
- implement a __call__ method that contains the computation code
- implement a checkpoint method to save the current state of the computation
import submitit
class MyCheckpointableAlgorithm(submitit.helpers.Checkpointable):
def __call__(self, ....):
....
def checkpoint(self, *args, **kwargs):
....
if __name__ == "__main__":
executor = submitit.AutoExecutor(folder)
myAlgo = MyCheckpointableAlgorithm()
job = executor.submit(myAlgo, ...)
print('output: ', job.result())
The submitit documentation provides a full example of implementing a checkpointing mechanism for a multinomial logistic regression based on the Scikit-learn library.
Slurm and OAR mechanisms for resuming preempted jobs
OAR and Slurm do not provide the same strategy for implementing the resumption of a preempted job:
- with the Slurm scheduler, the current job can be requeued directly from the node by executing the command scontrol requeue jobId (see the requeue description in the scontrol man page for more information)
- with the OAR scheduler, there are two possibilities: either requeue the job by executing oarsub --resubmit=origin_job_id (but the oarsub command is not available on the nodes), or activate an automatic requeuing mechanism by submitting the original job with the idempotent type.
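The two requeue commands above can be captured in a small helper, sketched below; `requeue_command` is a hypothetical name used only for illustration, building the command lines exactly as described in the text.

```python
def requeue_command(scheduler, job_id):
    """Build the shell command that requeues a preempted job.
    Hypothetical helper based on the commands described above."""
    if scheduler == "slurm":
        # scontrol requeue can be executed directly from the node
        return ["scontrol", "requeue", str(job_id)]
    if scheduler == "oar":
        # note: the oarsub command is not available on the nodes themselves
        return ["oarsub", f"--resubmit={job_id}"]
    raise ValueError(f"unknown scheduler: {scheduler}")
```

The resulting list could be passed, for instance, to subprocess.run on a machine where the corresponding command is available.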
How to implement the checkpointing mechanism
As explained in the previous section, to implement a checkpointing mechanism with submitit, you need to:
- in the __call__ method:
  - if the OAR scheduler is used, wait 60 seconds before starting the computation, to ensure that your job will be rescheduled if it is preempted
  - check if an existing state is stored on the persistent storage and retrieve it
- in the checkpoint method:
  - save the state of your computation (e.g., to files on a persistent storage)
  - conclude the method with return super().checkpoint(*args, **kwargs), which will create a DelayedSubmission object corresponding to the requeued job.
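The save/retrieve logic at the heart of these two methods can be sketched independently of any scheduler. In the sketch below, `load_state`, `save_state` and `accumulate` are hypothetical names, and the `stop_at` parameter only simulates a preemption:

```python
import os
import pickle

def load_state(state_file):
    """Retrieve a previously saved state (what __call__ does first), or start fresh."""
    if os.path.exists(state_file):
        with open(state_file, "rb") as f:
            return pickle.load(f)
    return {"index": 0, "total": 0}

def save_state(state, state_file):
    """Persist the current computation state (what checkpoint() would do)."""
    with open(state_file, "wb") as f:
        pickle.dump(state, f)

def accumulate(values, state_file, stop_at=None):
    """Sum `values`, resuming from any saved state."""
    state = load_state(state_file)
    for i in range(state["index"], len(values)):
        if stop_at is not None and i == stop_at:
            save_state(state, state_file)  # "preempted": save state and stop
            return None
        state["total"] += values[i]
        state["index"] = i + 1
    return state["total"]
```

Running accumulate once with stop_at set, then again without it, shows the computation resuming from the saved index instead of restarting from the beginning.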
Template for Slurm
def __call__(self, ....):
... retrieve a potential existing state ...
... start computation ...
def checkpoint(self, *args: Any, **kwargs: Any) -> submitit.helpers.DelayedSubmission:
... save current computation state ...
return super().checkpoint(*args, **kwargs)
Template for OAR
def __call__(self, ....):
# wait 60s at first to ensure that the checkpointing mechanism is working
time.sleep(60)
... retrieve a potential existing state ...
... start computation ...
def checkpoint(self, *args: Any, **kwargs: Any) -> submitit.helpers.DelayedSubmission:
... save current computation state ...
return super().checkpoint(*args, **kwargs)
How to simulate a preemption
If you want to test that the checkpointing mechanism works properly with your algorithm, you can simulate a preemption by calling the job's _interrupt() method. The call to _interrupt() should be made after the job starts (i.e., reaches the running state), after waiting a couple of seconds to ensure the computation has started. An example below:
...
job = ex.submit(myAlgo, ...)
# Wait for the job to be running.
while job.state != "RUNNING":
time.sleep(1)
# Wait for 60s to let the algorithm run for a while
time.sleep(60)
# Simulate preemption.
job._interrupt()
# Get the result
result = job.result()
print(f"Result: {result}")
To validate that the checkpointing executed correctly, you should:
- check the scheduler to see that the job was stopped and then restarted
- check the output of your algorithm to see that the computation was saved and did not restart from the beginning
Job array with Submitit
If you need to run the same computation n times, or to execute the same algorithm with n different sets of parameters, submitit implements a dedicated feature for this use case: job arrays. This feature leverages the Slurm or OAR job array mechanism. You should use job arrays because they are faster than submitting all jobs independently (all jobs are submitted in a single call to the scheduler).
To use job arrays with submitit, you need to call the executor.map_array method. The example below shows how to execute 4 different additions (i.e., the same algorithm applied to four distinct parameter pairs: 1+10, 2+20, 3+30, 4+40):
import submitit
def add(a, b):
return a + b
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
executor = submitit.AutoExecutor(folder="log_test")
jobs = executor.map_array(add, a, b)
# Iterate over the 4 jobs and print their results
for job in jobs:
print("Job: ", job.job_id, " result: ", job.result())
The output is as follows:
Job: 1948179 result: 11
Job: 1948180 result: 22
Job: 1948178 result: 33
Job: 1948181 result: 44
Note: Unlike the SlurmExecutor with […]
Note: As explained in the submitit documentation, one pickle file is created for each job of an array. If you have large objects in your functions (like a full PyTorch model), you should serialize them once and only pass their path to the submitted function.
TODO : Job arrays through a context manager
Other advanced submitit features
Submitit provides advanced features like:
- Concurrent job executions (example available here)
- Multi-threading executions with ThreadPoolExecutor (example available here)
- Integration with asyncio for asynchronous job execution (example available here)
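Since submitit executors follow the same submit/result interface as Python's standard concurrent.futures, the multi-threading pattern can be sketched locally with ThreadPoolExecutor; the `add` function below is just a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def add(a, b):
    return a + b

# Submit several calls and collect results, mirroring the
# executor.submit(...) / job.result() pattern used with submitit.
with ThreadPoolExecutor(max_workers=4) as ex:
    futures = [ex.submit(add, a, b) for a, b in [(1, 10), (2, 20), (3, 30)]]
    results = [f.result() for f in futures]

print(results)  # [11, 22, 33]
```

The same code shape works with a submitit executor in place of ThreadPoolExecutor, with each future replaced by a scheduler job.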
Current status of the OAR executor
The current OARExecutor implementation does not yet cover all Submitit features. For now, it includes:
- Submission of functions
- Error handling and recording of stack traces with job.result()
- Checkpointing of stateful callables and automatic resubmission from current state when preempted
- Job arrays
- Concurrent jobs
- Asyncio coroutines
The following features are not currently supported:
- Multi-task jobs
- Some parameters (as listed in the table of this section)