From: "Michael R. Crusoe" <crusoe@debian.org>
Date: Mon, 7 Jan 2019 02:43:15 -0800
Subject: Update docs to reflect a local install
Forwarded: not-needed
Last-Update: 2024-01-23
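Description: The upstream quick start assumes Toil was pip-installed into a
 virtualenv. Drop the pip-install steps and the ``(venv) $`` prompt prefix,
 and remove the separate install page from the toctree, so the documentation
 matches an environment where Toil, toil-cwl-runner, and toil-wdl-runner are
 already installed.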
---
 docs/gettingStarted/quickStart.rst | 85 +++++++++++++++-----------------------
 docs/index.rst                     |  1 -
 2 files changed, 33 insertions(+), 53 deletions(-)
diff --git a/docs/gettingStarted/quickStart.rst b/docs/gettingStarted/quickStart.rst
index 7a4dcf2..fbaeb41 100644
--- a/docs/gettingStarted/quickStart.rst
+++ b/docs/gettingStarted/quickStart.rst
@@ -13,18 +13,6 @@ The `Common Workflow Language`_ (CWL) is an emerging standard for writing
workflows that are portable across multiple workflow engines and platforms.
Running CWL workflows using Toil is easy.
-#. First ensure that Toil is installed with the
- ``cwl`` extra (see :ref:`extras`)::
-
- (venv) $ pip install 'toil[cwl]'
-
- This installs the ``toil-cwl-runner`` executable.
-
- .. note::
-
- Don't actually type ``(venv) $`` in at the beginning of each command. This is intended only to remind the user that
- they should have their :ref:`virtual environment <venvPrep>` running.
-
#. Copy and paste the following code block into ``example.cwl``:
.. code-block:: yaml
@@ -50,11 +38,11 @@ Running CWL workflows using Toil is easy.
#. To run the workflow simply enter ::
- (venv) $ toil-cwl-runner example.cwl example-job.yaml
+ $ toil-cwl-runner example.cwl example-job.yaml
Your output will be in ``output.txt``::
- (venv) $ cat output.txt
+ $ cat output.txt
Hello world!
@@ -84,13 +72,6 @@ Running a basic WDL workflow
The `Workflow Description Language`_ (WDL) is another emerging language for writing workflows that are portable across multiple workflow engines and platforms.
Running WDL workflows using Toil is still in alpha, and currently experimental. Toil currently supports basic workflow syntax (see :ref:`wdl` for more details and examples). Here we go over running a basic WDL helloworld workflow.
-#. First ensure that Toil is installed with the
- ``wdl`` extra (see :ref:`extras`)::
-
- (venv) $ pip install 'toil[wdl]'
-
- This installs the ``toil-wdl-runner`` executable.
-
#. Copy and paste the following code block into ``wdl-helloworld.wdl``::
workflow write_simple_file {
@@ -110,11 +91,11 @@ Running WDL workflows using Toil is still in alpha, and currently experimental.
#. To run the workflow simply enter ::
- (venv) $ toil-wdl-runner wdl-helloworld.wdl wdl-helloworld.json
+ $ toil-wdl-runner wdl-helloworld.wdl wdl-helloworld.json
Your output will be in ``wdl-helloworld-output.txt``::
- (venv) $ cat wdl-helloworld-output.txt
+ $ cat wdl-helloworld-output.txt
Hello world!
This will, like the CWL example above, use the ``single_machine`` batch system
@@ -144,7 +125,7 @@ An example Toil Python workflow can be run with just three steps:
3. Specify the name of the :ref:`job store <jobStoreOverview>` and run the workflow::
- (venv) $ python3 helloWorld.py file:my-job-store
+ $ python3 helloWorld.py file:my-job-store
For something beyond a "Hello, world!" example, refer to :ref:`runningDetail`.
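The ``helloWorld.py`` driven by the command above is not shown in this hunk; a minimal Toil script of that shape, sketched here from the public Toil API rather than copied from the upstream file, looks roughly like::

    from toil.common import Toil
    from toil.job import Job

    def helloWorld(message, memory="1G", cores=1, disk="1G"):
        # A trivial job function; memory, cores, and disk become resource requirements.
        return f"Hello, world! Here's a message: {message}"

    if __name__ == "__main__":
        # The positional argument is the job store locator, e.g. file:my-job-store.
        parser = Job.Runner.getDefaultArgumentParser()
        options = parser.parse_args()
        options.clean = "always"

        with Toil(options) as toil:
            output = toil.start(Job.wrapFn(helloWorld, "You did it!"))
        print(output)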
@@ -169,7 +150,7 @@ Running the example
#. Run it with the default settings::
- (venv) $ python3 sort.py file:jobStore
+ $ python3 sort.py file:jobStore
The workflow created a file called ``sortedFile.txt`` in your current directory.
Have a look at it and notice that it contains a whole lot of sorted lines!
@@ -186,7 +167,7 @@ Running the example
3. Run with custom options::
- (venv) $ python3 sort.py file:jobStore \
+ $ python3 sort.py file:jobStore \
--numLines=5000 \
--lineLength=10 \
--overwriteOutput=True \
@@ -305,7 +286,7 @@ in addition to messages from the batch system and jobs. This can be configured
with the ``--logLevel`` flag. For example, to only log ``CRITICAL`` level
messages to the screen::
- (venv) $ python3 sort.py file:jobStore \
+ $ python3 sort.py file:jobStore \
--logLevel=critical \
--overwriteOutput=True
@@ -331,7 +312,7 @@ example (the first line of ``down()``):
When we run the pipeline, Toil will show a detailed failure log with a traceback::
- (venv) $ python3 sort.py file:jobStore
+ $ python3 sort.py file:jobStore
...
---TOIL WORKER OUTPUT LOG---
...
@@ -353,13 +334,13 @@ that a job store of the same name already exists. By default, in the event of a
failure, the job store is preserved so that the workflow can be restarted,
starting from the previously failed jobs. We can restart the pipeline by running ::
- (venv) $ python3 sort.py file:jobStore \
+ $ python3 sort.py file:jobStore \
--restart \
--overwriteOutput=True
We can also change the number of times Toil will attempt to retry a failed job::
- (venv) $ python3 sort.py file:jobStore \
+ $ python3 sort.py file:jobStore \
--retryCount 2 \
--restart \
--overwriteOutput=True
@@ -373,7 +354,7 @@ line 30, or remove it, and then run
::
- (venv) $ python3 sort.py file:jobStore \
+ $ python3 sort.py file:jobStore \
--restart \
--overwriteOutput=True
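The flags used above (``--logLevel``, ``--retryCount``, ``--restart``) can also be set programmatically when a workflow is driven from Python; a minimal sketch, assuming the option attributes mirror the flag names::

    from toil.common import Toil
    from toil.job import Job

    def root(job):
        job.fileStore.logToMaster("running root job")

    # Assumption: option attributes mirror the command-line flag names.
    options = Job.Runner.getDefaultOptions("file:jobStore")
    options.logLevel = "critical"
    options.retryCount = 2

    with Toil(options) as toil:
        if toil.options.restart:
            toil.restart()                   # resume a previously failed run
        else:
            toil.start(Job.wrapJobFn(root))  # start a fresh run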
@@ -401,7 +382,7 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
#. Launch a cluster in AWS using the :ref:`launchCluster` command::
- (venv) $ toil launch-cluster <cluster-name> \
+ $ toil launch-cluster <cluster-name> \
--clusterType kubernetes \
--keyPairName <AWS-key-pair-name> \
--leaderNodeType t2.medium \
@@ -412,13 +393,13 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
#. Copy ``helloWorld.py`` to the ``/tmp`` directory on the leader node using the :ref:`rsyncCluster` command::
- (venv) $ toil rsync-cluster --zone us-west-2a <cluster-name> helloWorld.py :/tmp
+ $ toil rsync-cluster --zone us-west-2a <cluster-name> helloWorld.py :/tmp
Note that the command requires defining the file to copy as well as the target location on the cluster leader node.
#. Login to the cluster leader node using the :ref:`sshCluster` command::
- (venv) $ toil ssh-cluster --zone us-west-2a <cluster-name>
+ $ toil ssh-cluster --zone us-west-2a <cluster-name>
Note that this command will log you in as the ``root`` user.
@@ -439,7 +420,7 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
#. Use the :ref:`destroyCluster` command to destroy the cluster::
- (venv) $ toil destroy-cluster --zone us-west-2a <cluster-name>
+ $ toil destroy-cluster --zone us-west-2a <cluster-name>
Note that this command will destroy the cluster leader
node and any resources created to run the job, including the S3 bucket.
@@ -457,7 +438,7 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
#. First launch a node in AWS using the :ref:`launchCluster` command::
- (venv) $ toil launch-cluster <cluster-name> \
+ $ toil launch-cluster <cluster-name> \
--clusterType kubernetes \
--keyPairName <AWS-key-pair-name> \
--leaderNodeType t2.medium \
@@ -467,12 +448,12 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
#. Copy ``example.cwl`` and ``example-job.yaml`` from the :ref:`CWL example <cwlquickstart>` to the node using
the :ref:`rsyncCluster` command::
- (venv) $ toil rsync-cluster --zone us-west-2a <cluster-name> example.cwl :/tmp
- (venv) $ toil rsync-cluster --zone us-west-2a <cluster-name> example-job.yaml :/tmp
+ $ toil rsync-cluster --zone us-west-2a <cluster-name> example.cwl :/tmp
+ $ toil rsync-cluster --zone us-west-2a <cluster-name> example-job.yaml :/tmp
#. SSH into the cluster's leader node using the :ref:`sshCluster` utility::
- (venv) $ toil ssh-cluster --zone us-west-2a <cluster-name>
+ $ toil ssh-cluster --zone us-west-2a <cluster-name>
#. Once on the leader node, command line tools such as ``kubectl`` will be available to you. It's also a good idea to
update and install the following::
@@ -503,7 +484,7 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
#. Finally, log out of the leader node and from your local computer, destroy the cluster::
- (venv) $ toil destroy-cluster --zone us-west-2a <cluster-name>
+ $ toil destroy-cluster --zone us-west-2a <cluster-name>
.. _awscactus:
@@ -543,7 +524,7 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
#. Launch a cluster using the :ref:`launchCluster` command::
- (venv) $ toil launch-cluster <cluster-name> \
+ $ toil launch-cluster <cluster-name> \
--provisioner <aws, gce> \
--keyPairName <key-pair-name> \
--leaderNodeType <type> \
@@ -559,11 +540,11 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
When using AWS, setting the environment variable eliminates having to specify the ``--zone`` option
for each command. This will be supported for GCE in the future. ::
- (venv) $ export TOIL_AWS_ZONE=us-west-2c
+ $ export TOIL_AWS_ZONE=us-west-2c
#. Create appropriate directory for uploading files::
- (venv) $ toil ssh-cluster --provisioner <aws, gce> <cluster-name>
+ $ toil ssh-cluster --provisioner <aws, gce> <cluster-name>
$ mkdir /root/cact_ex
$ exit
@@ -572,18 +553,18 @@ Also! Remember to use the :ref:`destroyCluster` command when finished to destro
`here <https://github.com/ComparativeGenomicsToolkit/cactus#seqfile-the-input-file>`__), organisms' genome sequence
files in FASTA format, and configuration files (e.g. blockTrim1.xml, if desired), up to the leader node::
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> pestis-short-aws-seqFile.txt :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000169655.1_ASM16965v1_genomic.fna :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000006645.1_ASM664v1_genomic.fna :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000182485.1_ASM18248v1_genomic.fna :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000013805.1_ASM1380v1_genomic.fna :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> setup_leaderNode.sh :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> blockTrim1.xml :/root/cact_ex
- (venv) $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> blockTrim3.xml :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> pestis-short-aws-seqFile.txt :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000169655.1_ASM16965v1_genomic.fna :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000006645.1_ASM664v1_genomic.fna :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000182485.1_ASM18248v1_genomic.fna :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> GCF_000013805.1_ASM1380v1_genomic.fna :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> setup_leaderNode.sh :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> blockTrim1.xml :/root/cact_ex
+ $ toil rsync-cluster --provisioner <aws, gce> <cluster-name> blockTrim3.xml :/root/cact_ex
#. Log in to the leader node::
- (venv) $ toil ssh-cluster --provisioner <aws, gce> <cluster-name>
+ $ toil ssh-cluster --provisioner <aws, gce> <cluster-name>
#. Set up the environment of the leader node to run Cactus::
diff --git a/docs/index.rst b/docs/index.rst
index 02e32f0..7c30b13 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -29,7 +29,6 @@ If using Toil for your research, please cite
.. toctree::
:caption: Getting Started
- gettingStarted/install
gettingStarted/quickStart
.. toctree::