#!/usr/bin/env python
from distutils.core import setup
#from setuptools import find_packages
import sys, os
if not sys.version_info[0:2] >= (2, 6):
    sys.stderr.write("Requires Python 2.6 or later\n")
    sys.exit(1)
# import ruffus from the source tree (rather than any installed copy)
# so that we can read the current version number
sys.path.insert(0, os.path.abspath("."))
import ruffus.ruffus_version
sys.path.pop(0)
module_dependencies = []
#module_dependencies = ['multiprocessing>=2.6', 'simplejson']
setup(
    name='ruffus',
    version=ruffus.ruffus_version.__version,   # major.minor[.patch[.sub]]
    description='Light-weight Python Computational Pipeline Management',
    maintainer="Leo Goodstadt",
    maintainer_email="ruffus_lib@llew.org.uk",
    author='Leo Goodstadt',
    author_email='ruffus@llew.org.uk',
    long_description="""
***************************************
Overview
***************************************

The Ruffus module is a lightweight way to add support
for running computational pipelines.

Computational pipelines are often conceptually quite simple, especially
if we break down the process into simple stages, or separate **tasks**.

Each stage or **task** in a computational pipeline is represented by a Python function.
Each Python function can be called in parallel to run multiple **jobs**.

Ruffus was originally designed for use in bioinformatics to analyse multiple genome
data sets.

***************************************
Documentation
***************************************

Ruffus documentation can be found `here <http://www.ruffus.org.uk>`__,
with `download notes <http://www.ruffus.org.uk/installation.html>`__,
a `tutorial <http://www.ruffus.org.uk/tutorials/new_tutorial/introduction.html>`__ and
an `in-depth manual <http://www.ruffus.org.uk/tutorials/new_tutorial/manual_contents.html>`__.

***************************************
Background
***************************************

The purpose of a pipeline is to determine automatically which parts of a multi-stage
process need to be run, and in what order, to reach an objective (a "target").

Computational pipelines, especially for analysing large scientific datasets, are
in widespread use.
However, even a conceptually simple series of steps can be difficult to set up and
maintain.

***************************************
Design
***************************************

The Ruffus module has the following design goals:

* Lightweight
* Scalable / Flexible / Powerful
* Standard Python
* Unintrusive
* As simple as possible

***************************************
Features
***************************************

Automatic support for:

* Managing dependencies
* Parallel jobs, including dispatching work to computational clusters
* Re-starting from arbitrary points, especially after errors (checkpointing)
* Display of the pipeline as a flowchart
* Managing complex pipeline topologies

***************************************
A Simple example
***************************************

Use the **@follows(...)** Python decorator before the function definitions::

    from ruffus import *
    import sys

    def first_task():
        print("First task")

    @follows(first_task)
    def second_task():
        print("Second task")

    @follows(second_task)
    def final_task():
        print("Final task")

The ``@follows`` decorator indicates that the ``first_task`` function precedes ``second_task`` in
the pipeline.

The canonical Ruffus decorator is ``@transform``, which **transforms** data flowing down a
computational pipeline from one stage to the next.

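For instance, a minimal sketch of a single **transform** stage (the file names below
are purely illustrative)::

    from ruffus import *

    # each ".txt" starting file produces one ".copy" output file;
    # each input/output pair runs as a separate job
    @transform(["a.txt", "b.txt"], suffix(".txt"), ".copy")
    def copy_file(input_file, output_file):
        open(output_file, "w").write(open(input_file).read())
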
********
Usage
********

Each stage or **task** in a computational pipeline is represented by a Python function.
Each Python function can be called in parallel to run multiple **jobs**.

1. Import module::

    import ruffus

2. Annotate functions with Python decorators

3. Print the dependency graph, if necessary:

   - For a graphical flowchart in ``jpg``, ``svg``, ``dot``, ``png``, ``ps``, ``gif`` formats::

        pipeline_printout_graph("flowchart.svg")

     This requires ``dot`` (part of Graphviz) to be installed.

   - For a text printout of all jobs::

        pipeline_printout(sys.stdout)

4. Run the pipeline::

    pipeline_run()

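Putting these steps together, a minimal sketch of a two-stage pipeline (the file names
are again purely illustrative, and the starting files are assumed to exist already)::

    from ruffus import *
    import sys

    # stage 1: each ".txt" starting file produces one ".copy" file
    @transform(["a.txt", "b.txt"], suffix(".txt"), ".copy")
    def copy_file(input_file, output_file):
        open(output_file, "w").write(open(input_file).read())

    # stage 2: each output of copy_file produces one ".final" file
    @transform(copy_file, suffix(".copy"), ".final")
    def finalise(input_file, output_file):
        open(output_file, "w").write(open(input_file).read())

    pipeline_printout(sys.stdout)   # text printout of out-of-date jobs
    pipeline_run()                  # run only the jobs which are needed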
""",
    url='http://www.ruffus.org.uk',
    download_url="https://pypi.python.org/pypi/ruffus",
    install_requires=module_dependencies,  # ['multiprocessing>=1.0', 'json'], #, 'python>=2.5'],
    setup_requires=module_dependencies,    # ['multiprocessing>=1.0', 'json'], #, 'python>=2.5'],
    classifiers=[
        'Intended Audience :: End Users/Desktop',
        'Development Status :: 5 - Production/Stable',
        'Intended Audience :: Developers',
        'Intended Audience :: Science/Research',
        'Intended Audience :: Information Technology',
        'License :: OSI Approved :: MIT License',
        'Programming Language :: Python',
        'Topic :: Scientific/Engineering',
        'Topic :: Scientific/Engineering :: Bio-Informatics',
        'Topic :: System :: Distributed Computing',
        'Topic :: Software Development :: Build Tools',
        'Topic :: Software Development :: Libraries',
        'Environment :: Console',
    ],
    license="MIT",
    keywords="make task pipeline parallel bioinformatics science",
    #packages = find_packages('src'),   # include all packages under src
    #package_dir = {'':'src'},          # packages are under src
    packages=['ruffus'],
    package_dir={'ruffus': 'ruffus'},
    include_package_data=True,          # include everything in source control
    #package_data = {
    #    # If any package contains *.txt files, include them:
    #    '': ['*.TXT'],
    #}
)
#
# http://pypi.python.org/pypi
# http://docs.python.org/distutils/packageindex.html
#
#
#
# python setup.py register
# python setup.py sdist --format=gztar upload