
Nipype

Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

What is Nipype?


[Figure: overview of the Nipype architecture (Fninf-05-00013-g001.jpg)]

Nipype (Neuroimaging in Python Pipelines and Interfaces) is a flexible, lightweight and extensible neuroimaging data processing framework in Python.

It is a community-developed initiative under the umbrella of Nipy.

It addresses the heterogeneous collection of specialized applications in neuroimaging: SPM in MATLAB, FSL in shell, and Nipy in Python.

It provides a uniform interface to facilitate interaction between these different packages within a single workflow.

The source code, issues and pull requests can be found here.

The fundamental parts of Nipype are Interfaces, the Workflow Engine and the Execution Plugins, as you can see in the figure above:

  • Interface: wraps a program or function
  • (Map)Node: wraps an Interface for use in a Workflow (see the sketch below)
  • Workflow: a graph or a forest of graphs whose edges represent data flow
  • Plugin: a component which describes how a Workflow should be executed
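
For illustration, here is a minimal sketch of how an Interface is wrapped in a (Map)Node; the FSL interface is real, but the file paths are hypothetical placeholders:

import nipype.interfaces.fsl as fsl
from nipype import MapNode

# a MapNode runs the wrapped Interface once per element of each
# field listed in iterfield (here: one BET brain extraction per image)
bet = MapNode(fsl.BET(), name='bet', iterfield=['in_file'])
bet.inputs.in_file = ['/path/to/sub-01_T1w.nii.gz',  # hypothetical paths
                      '/path/to/sub-02_T1w.nii.gz']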

Among the execution plugins, you can find:

  • an OAR plugin here.
  • a Slurm plugin here.


Installation

pip can be used to install the stable release of Nipype:

frontend.site:
pip install --user nipype

It is recommended to install Python dependencies within a virtual environment. To do so, create and activate the environment before running the pip command:

frontend.site:
python3 -m venv nipype
frontend.site:
source nipype/bin/activate
frontend.site:
pip install nipype
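
To check that the installation succeeded, you can for instance print the installed version:

frontend.site:
python -c "import nipype; print(nipype.__version__)"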


Basic usage

Let's assume that you have previously installed Nipype's dependencies and that FSL is installed, in order to process the Rhyme judgment dataset available on OpenNeuro.

Here, we present a basic example of pre-processing using Nipype:

import nipype.interfaces.fsl as fsl
from nipype import Node, Workflow

preprocessing = Workflow(name='preprocessing')

# using fMRI's linear image registration tool for intra-modal motion correction
mcflirt = Node(fsl.MCFLIRT(), name='mcflirt')
mean = Node(fsl.MeanImage(), name='mean')

# add Nodes to Workflow
preprocessing.connect(mcflirt, 'out_file', mean, 'in_file')

mcflirt.inputs.in_file = '/home/ychi/nipype/ds000003/sub-01/func/sub-01_task-rhymejudgment_bold.nii.gz'

# Workflow execution
preprocessing.run()

In the result of the workflow, we can see:

  • Running serially
  • the output folder and the execution time for the mcflirt
  • the output folder and the execution time for the mean
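
Optionally, you can export the workflow graph to inspect its structure; this is a sketch assuming graphviz and pydot are available in your environment:

# render the workflow graph to an image (graph2use controls the level of detail)
preprocessing.write_graph(graph2use='flat', format='png')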

Using Nipype Plugins

As shown in the figure above, Nipype's workflow engine supports a plugin architecture for workflow execution. The available plugins, such as SGE, PBS, HTCondor, LSF, Slurm or OAR, allow local and distributed execution of workflows, as well as debugging.

All plugins can be executed by calling:

workflow.run(plugin=PLUGIN_NAME, plugin_args=ARGS_DICT)
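
For instance, to run independent nodes of the basic example in parallel on a single machine, you could use the MultiProc plugin (the number of worker processes shown here is arbitrary):

preprocessing.run(plugin='MultiProc', plugin_args={'n_procs': 4})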


Using Nipype's OAR plugin

To use the OAR plugin, you need to call:

preprocessing.run(plugin='OAR')

Note: you can also provide traditional oarsub arguments by using the oarsub_args parameter:

preprocessing.run(plugin='OAR', plugin_args=dict(oarsub_args='-q default'))

Other parameters supported by the OAR plugin are:

  • template: custom template file for batch job submission (e.g. 'hello-world.sh')
  • max_jobname_len: maximum length of the job name (default: 15)
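
For example, a hypothetical call combining custom oarsub options with a shorter job name limit could look like this:

preprocessing.run(plugin='OAR',
                  plugin_args={'oarsub_args': '-l nodes=1,walltime=1:00:00',
                               'max_jobname_len': 10})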

The basic example above can be executed using OAR resources as follows:

import nipype.interfaces.fsl as fsl
from nipype import Node, Workflow

preprocessing = Workflow(name='preprocessing', base_dir='/home/ychi/nipype')

# using fMRI's linear image registration tool for intra-modal motion correction
mcflirt = Node(fsl.MCFLIRT(), name='mcflirt')
mean = Node(fsl.MeanImage(), name='mean')

# add Nodes to Workflow
preprocessing.connect(mcflirt, 'out_file', mean, 'in_file')

mcflirt.inputs.in_file = '/home/ychi/nipype/ds000003/sub-01/func/sub-01_task-rhymejudgment_bold.nii.gz'

# Workflow execution
preprocessing.run(plugin='OAR', plugin_args={'queue': 'default'})

Note the use of base_dir here. If no location is specified, Nipype creates a temporary folder in the /tmp directory, which is not shared across remote resources. That is why we specify a directory on the NFS-mounted home as base_dir, to be sure that computations are correctly performed on distant nodes. Similarly, make sure that the FSL installation and the input data folder are accessible from the compute nodes.

In the result of the workflow, we can see:

  • Running in parallel
  • the output folder and the execution time for the mcflirt
  • the output folder and the execution time for the mean

Using Nipype's Slurm plugin

For example, to execute the basic preprocessing example above with Slurm resources, you need to call:

preprocessing.run(plugin='SLURM')

Other parameters supported by the Slurm plugin are:

  • sbatch_args: command line arguments to be passed to sbatch
  • template: custom template file for batch job submission (e.g. 'hello-world.sh')
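
For example, a hypothetical call passing sbatch options through sbatch_args could look like this:

preprocessing.run(plugin='SLURM',
                  plugin_args={'sbatch_args': '--nodes=1 --time=01:00:00'})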


Pydra

Pydra is part of the second generation of the Nipype ecosystem and is meant to provide additional flexibility and reproducibility.

Pydra rewrites the Nipype engine with mapping and joining as first-class operations.
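
As an illustration of mapping as a first-class operation, here is a minimal, hypothetical sketch based on the pydra 0.x API, executed with the local concurrent-futures submitter:

import pydra

# turn a plain Python function into a Pydra task
@pydra.mark.task
def add_two(x):
    return x + 2

# split the task over the values of x: one run per element (mapping)
task = add_two(x=[1, 2, 3]).split("x")

# run locally with the concurrent-futures ("cf") submitter
with pydra.Submitter(plugin="cf") as sub:
    sub(task)

print(task.result())  # one result per element of x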

However, Pydra does not have OAR support yet.

Examples and details of Pydra's OAR extension can be found here.