File: debug.rst

.. _debug:

==========================
Debugging Nipype Workflows
==========================

Throughout Nipype_ we try to provide meaningful error messages. If you run into
an error that does not have a meaningful message, please let us know so that we
can improve error reporting.

Here are some notes that may help with debugging workflows or understanding
performance issues.

#. Always run your workflow first on a single iterable (e.g., one subject) and
   gradually increase the complexity of the execution plugin
   (`Linear` -> `MultiProc` -> `SGE`), as in the first example after this list.

#. Use the debug config mode. This can be done by setting::

      from nipype import config
      config.enable_debug_mode()

   at the top of your Nipype script, before any workflows are built or run.

#. There are several configuration options that can help with debugging. See
   :ref:`config_file` for more details::

       keep_inputs
       remove_unnecessary_outputs
       stop_on_first_crash
       stop_on_first_rerun

   A sketch of setting these options from a script follows this list.

#. When running in distributed mode on cluster engines, it is possible for a
   node to fail without generating a crash file in the crashdump directory. In
   such cases, the crash file will be stored in the `batch` directory.

#. All Nipype crash files can be inspected with the `nipype_display_crash`
   utility.

#. Nipype determines the hash of the input state of a node. If any input
   contains strings that represent files on the system path, the hash
   evaluation mechanism will determine the timestamp or content hash of each
   of those files. Thus any node with an input containing huge dictionaries
   (or lists) of file names can incur serious performance penalties; see the
   `hash_method` example after this list for one way to mitigate this.

#. When processing large amounts of data, set `stop_on_first_crash` to
   `false` to get the bulk of the processing done, and then set it to `true`
   to find and debug the failing cases. Running with `stop_on_first_crash`
   set to `false` is a reasonable option when you expect roughly 90% of the
   data to execute properly; the configuration sketch after this list shows
   how to set it.

#. Sometimes Nipype will hang as if nothing is happening, and hitting Ctrl+C
   produces a `ConcurrentLogHandler` error. Simply remove the `pypeline.lock`
   file in your home directory and continue.
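
Here is a minimal sketch of the first point above. The workflow, the node
setup, and the variable names (`wf`, `node`, `double`) are hypothetical, as
are the `n_procs` value and the SGE queue arguments; the plugin names are
Nipype's standard execution plugins::

    import nipype.pipeline.engine as pe
    import nipype.interfaces.utility as niu

    def double(x):
        return 2 * x

    node = pe.Node(niu.Function(input_names=['x'], output_names=['out'],
                                function=double),
                   name='double')
    node.iterables = ('x', [1])  # keep to a single iterable while debugging

    wf = pe.Workflow(name='debug_demo')
    wf.add_nodes([node])

    # 1. Run serially first: errors surface in order and are easy to trace.
    wf.run(plugin='Linear')

    # 2. Once that works, parallelize on the local machine.
    wf.run(plugin='MultiProc', plugin_args={'n_procs': 4})

    # 3. Finally, distribute on the cluster (the queue arguments are
    #    site-specific placeholders).
    wf.run(plugin='SGE', plugin_args={'qsub_args': '-q myqueue'})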
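
For the configuration options listed above (including the
`stop_on_first_crash` workflow for large datasets), one way to set them from
a script is sketched below. It assumes the `config` object exposes a
ConfigParser-style `set` method; the options can equally be set in a
`nipype.cfg` file as described in :ref:`config_file`::

    from nipype import config

    # Values are strings, mirroring the nipype.cfg file format.
    config.set('execution', 'keep_inputs', 'true')
    config.set('execution', 'remove_unnecessary_outputs', 'false')
    config.set('execution', 'stop_on_first_rerun', 'false')

    # For large datasets: 'false' for the bulk run, then 'true' when
    # re-running to isolate and debug the failing cases.
    config.set('execution', 'stop_on_first_crash', 'false')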
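
Related to the note on input hashing: the `hash_method` execution option
controls whether file inputs are hashed by timestamp or by content. A sketch,
under the same assumptions as above::

    from nipype import config

    # 'timestamp' avoids reading every file when computing input hashes and
    # is cheaper for nodes with very long lists of file names; 'content'
    # re-hashes file contents and is more robust but slower.
    config.set('execution', 'hash_method', 'timestamp')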

.. include:: ../links_names.txt