Author: Michael R. Crusoe <crusoe@debian.org>
Description: Update the documentation to reflect a local install of Toil (as
 shipped by the package): drop the pip/virtualenv installation steps and the
 "(venv) $" prompt prefix from the example commands.
Forwarded: not-needed
--- toil.orig/docs/index.rst
+++ toil/docs/index.rst
@@ -31,7 +31,6 @@
:caption: Getting Started
:maxdepth: 2
- gettingStarted/install
gettingStarted/quickStart
.. toctree::
--- toil.orig/docs/gettingStarted/quickStart.rst
+++ toil/docs/gettingStarted/quickStart.rst
@@ -8,22 +8,15 @@
Running a basic workflow
------------------------
-A Toil workflow can be run with just three steps:
+A Toil workflow can be run with just two steps:
-1. Install Toil (see :ref:`installation-ref`)
-
-2. Copy and paste the following code block into a new file called ``helloWorld.py``:
+1. Copy and paste the following code block into a new file called ``helloWorld.py``:
.. literalinclude:: ../../src/toil/test/docs/scripts/tutorial_helloworld.py
-3. Specify the name of the :ref:`job store <jobStoreOverview>` and run the workflow::
-
- (venv) $ python helloWorld.py file:my-job-store
-
-.. note::
+2. Specify the name of the :ref:`job store <jobStoreOverview>` and run the workflow::
- Don't actually type ``(venv) $`` in at the beginning of each command. This is intended only to remind the user that
- they should have their :ref:`virtual environment <venvPrep>` running.
+ $ python helloWorld.py file:my-job-store
Congratulations! You've run your first Toil workflow using the default :ref:`Batch System <batchsysteminterface>`, ``singleMachine``,
using the ``file`` job store.
@@ -50,13 +43,6 @@
workflows that are portable across multiple workflow engines and platforms.
Running CWL workflows using Toil is easy.
-#. First ensure that Toil is installed with the
- ``cwl`` extra (see :ref:`extras`)::
-
- (venv) $ pip install 'toil[cwl]'
-
- This installs the ``toil-cwl-runner`` executable.
-
#. Copy and paste the following code block into ``example.cwl``:
.. code-block:: yaml
@@ -82,11 +68,11 @@
#. To run the workflow simply enter ::
- (venv) $ toil-cwl-runner example.cwl example-job.yaml
+ $ toil-cwl-runner example.cwl example-job.yaml
Your output will be in ``output.txt``::
- (venv) $ cat output.txt
+ $ cat output.txt
Hello world!
To learn more about CWL, see the `CWL User Guide`_ (from where this example was
@@ -104,13 +90,6 @@
The `Workflow Description Language`_ (WDL) is another emerging language for writing workflows that are portable across multiple workflow engines and platforms.
Running WDL workflows using Toil is still in alpha, and currently experimental. Toil currently supports basic workflow syntax (see :ref:`wdl` for more details and examples). Here we go over running a basic WDL helloworld workflow.
-#. First ensure that Toil is installed with the
- ``wdl`` extra (see :ref:`extras`)::
-
- (venv) $ pip install 'toil[wdl]'
-
- This installs the ``toil-wdl-runner`` executable.
-
#. Copy and paste the following code block into ``wdl-helloworld.wdl``::
workflow write_simple_file {
@@ -130,11 +109,11 @@
#. To run the workflow simply enter ::
- (venv) $ toil-wdl-runner wdl-helloworld.wdl wdl-helloworld.json
+ $ toil-wdl-runner wdl-helloworld.wdl wdl-helloworld.json
Your output will be in ``wdl-helloworld-output.txt``::
- (venv) $ cat wdl-helloworld-output.txt
+ $ cat wdl-helloworld-output.txt
Hello world!
To learn more about WDL, see the main `WDL website`_ .
@@ -161,7 +140,7 @@
#. Run it with the default settings::
- (venv) $ python sort.py file:jobStore
+ $ python sort.py file:jobStore
The workflow created a file called ``sortedFile.txt`` in your current directory.
Have a look at it and notice that it contains a whole lot of sorted lines!
@@ -178,7 +157,7 @@
3. Run with custom options::
- (venv) $ python sort.py file:jobStore \
+ $ python sort.py file:jobStore \
--numLines=5000 \
--lineLength=10 \
--overwriteOutput=True \
@@ -297,7 +276,7 @@
with the ``--logLevel`` flag. For example, to only log ``CRITICAL`` level
messages to the screen::
- (venv) $ python sort.py file:jobStore \
+ $ python sort.py file:jobStore \
--logLevel=critical \
--overwriteOutput=True
@@ -323,7 +302,7 @@
When we run the pipeline, Toil will show a detailed failure log with a traceback::
- (venv) $ python sort.py file:jobStore
+ $ python sort.py file:jobStore
...
---TOIL WORKER OUTPUT LOG---
...
@@ -345,13 +324,13 @@
failure, the job store is preserved so that the workflow can be restarted,
starting from the previously failed jobs. We can restart the pipeline by running ::
- (venv) $ python sort.py file:jobStore \
+ $ python sort.py file:jobStore \
--restart \
--overwriteOutput=True
We can also change the number of times Toil will attempt to retry a failed job::
- (venv) $ python sort.py file:jobStore \
+ $ python sort.py file:jobStore \
--retryCount 2 \
--restart \
--overwriteOutput=True
@@ -365,7 +344,7 @@
::
- (venv) $ python sort.py file:jobStore \
+ $ python sort.py file:jobStore \
--restart \
--overwriteOutput=True
@@ -393,7 +372,7 @@
#. Launch a cluster in AWS using the :ref:`launchCluster` command::
- (venv) $ toil launch-cluster <cluster-name> \
+ $ toil launch-cluster <cluster-name> \
--keyPairName <AWS-key-pair-name> \
--leaderNodeType t2.medium \
--zone us-west-2a
@@ -402,13 +381,13 @@
#. Copy ``helloWorld.py`` to the ``/tmp`` directory on the leader node using the :ref:`rsyncCluster` command::
- (venv) $ toil rsync-cluster --zone us-west-2a <cluster-name> helloWorld.py :/tmp
+ $ toil rsync-cluster --zone us-west-2a <cluster-name> helloWorld.py :/tmp
Note that the command requires defining the file to copy as well as the target location on the cluster leader node.
#. Login to the cluster leader node using the :ref:`sshCluster` command::
- (venv) $ toil ssh-cluster --zone us-west-2a <cluster-name>
+ $ toil ssh-cluster --zone us-west-2a <cluster-name>
Note that this command will log you in as the ``root`` user.
@@ -429,7 +408,7 @@
#. Use the :ref:`destroyCluster` command to destroy the cluster::
- (venv) $ toil destroy-cluster --zone us-west-2a <cluster-name>
+ $ toil destroy-cluster --zone us-west-2a <cluster-name>
Note that this command will destroy the cluster leader
node and any resources created to run the job, including the S3 bucket.
@@ -447,7 +426,7 @@
#. First launch a node in AWS using the :ref:`launchCluster` command::
- (venv) $ toil launch-cluster <cluster-name> \
+ $ toil launch-cluster <cluster-name> \
--keyPairName <AWS-key-pair-name> \
--leaderNodeType t2.medium \
--zone us-west-2a
@@ -455,12 +434,12 @@
#. Copy ``example.cwl`` and ``example-job.yaml`` from the :ref:`CWL example <cwlquickstart>` to the node using
the :ref:`rsyncCluster` command::
- (venv) $ toil rsync-cluster --zone us-west-2a <cluster-name> example.cwl :/tmp
- (venv) $ toil rsync-cluster --zone us-west-2a <cluster-name> example-job.yaml :/tmp
+ $ toil rsync-cluster --zone us-west-2a <cluster-name> example.cwl :/tmp
+ $ toil rsync-cluster --zone us-west-2a <cluster-name> example-job.yaml :/tmp
#. SSH into the cluster's leader node using the :ref:`sshCluster` utility::
- (venv) $ toil ssh-cluster --zone us-west-2a <cluster-name>
+ $ toil ssh-cluster --zone us-west-2a <cluster-name>
#. Once on the leader node, it's a good idea to update and install the following::
@@ -490,7 +469,7 @@
#. Finally, log out of the leader node and from your local computer, destroy the cluster::
- (venv) $ toil destroy-cluster --zone us-west-2a <cluster-name>
+ $ toil destroy-cluster --zone us-west-2a <cluster-name>
.. _awscactus:
@@ -544,11 +523,11 @@
When using AWS, setting the environment variable eliminates having to specify the ``--zone`` option
for each command. This will be supported for GCE in the future. ::
- (venv) $ export TOIL_AWS_ZONE=us-west-2c
+ $ export TOIL_AWS_ZONE=us-west-2c
#. Create appropriate directory for uploading files::
- (venv) $ toil ssh-cluster --provisioner <aws, gce> <cluster-name>
+ $ toil ssh-cluster --provisioner <aws, gce> <cluster-name>
$ mkdir /root/cact_ex
$ exit
@@ -557,18 +536,18 @@
`here <https://github.com/ComparativeGenomicsToolkit/cactus#seqfile-the-input-file>`__), organisms' genome sequence
files in FASTA format, and configuration files (e.g. blockTrim1.xml, if desired), up to the leader node::
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> pestis-short-aws-seqFile.txt :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000169655.1_ASM16965v1_genomic.fna :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000006645.1_ASM664v1_genomic.fna :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000182485.1_ASM18248v1_genomic.fna :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000013805.1_ASM1380v1_genomic.fna :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> setup_leaderNode.sh :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> blockTrim1.xml :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> blockTrim3.xml :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> pestis-short-aws-seqFile.txt :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000169655.1_ASM16965v1_genomic.fna :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000006645.1_ASM664v1_genomic.fna :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000182485.1_ASM18248v1_genomic.fna :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000013805.1_ASM1380v1_genomic.fna :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> setup_leaderNode.sh :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> blockTrim1.xml :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> blockTrim3.xml :/root/cact_ex
#. Log in to the leader node::
- (venv) $ toil ssh-cluster --provisioner <aws, gce> <cluster-name>
+ $ toil ssh-cluster --provisioner <aws, gce> <cluster-name>
#. Set up the environment of the leader node to run Cactus::