File: execution.rst

===============
Execution model
===============

If you've read the :doc:`../tutorial`, you should already be familiar with how
Fabric operates in the base case (a single task on a single host.) However, in
many situations you'll find yourself wanting to execute multiple tasks and/or
on multiple hosts. Perhaps you want to split a big task into smaller reusable
parts, or crawl a collection of servers looking for an old user to remove. Such
a scenario requires specific rules for when and how tasks are executed.

This document explores Fabric's execution model, including the main execution
loop, how to define host lists, how connections are made, and so forth.

.. note::

    Most of this material applies to the :doc:`fab <fab>` tool only, as this
    mode of use has historically been the main focus of Fabric's development.
    When writing version 0.9 we straightened out Fabric's internals to make it
    easier to use as a library, but there's still work to be done before this
    is as flexible and easy as we'd like it to be.

.. _execution-strategy:

Execution strategy
==================

Fabric currently provides a single, serial execution method, though more
options are planned for the future:

* A list of tasks is created. Currently this list is simply the arguments given
  to :doc:`fab <fab>`, preserving the order given.
* For each task, a task-specific host list is generated from various
  sources (see :ref:`host-lists` below for details.)
* The task list is walked through in order, and each task is run once per host
  in its host list.
* Tasks with no hosts in their host list are considered local-only, and will
  always run once and only once.
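
For illustration, this loop might be sketched in plain Python like so
(hypothetical helper names -- this is not Fabric's actual implementation)::

    def execute(tasks, host_lists):
        """Return the (task, host) pairs in the order they would run."""
        order = []
        for task in tasks:
            hosts = host_lists.get(task, [])
            if not hosts:
                # Local-only task: no hosts, so run exactly once.
                order.append((task, None))
            for host in hosts:
                order.append((task, host))
        return order

    print(execute(['taskA', 'taskB'],
                  {'taskA': ['host1', 'host2'], 'taskB': ['host1', 'host2']}))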

Thus, given the following fabfile::

    from fabric.api import run, env

    env.hosts = ['host1', 'host2']

    def taskA():
        run('ls')

    def taskB():
        run('whoami')

and the following invocation::

    $ fab taskA taskB

you will see that Fabric performs the following:

* ``taskA`` executed on ``host1``
* ``taskA`` executed on ``host2``
* ``taskB`` executed on ``host1``
* ``taskB`` executed on ``host2``

While this approach is simplistic, it allows for a straightforward composition
of task functions, and (unlike tools which push the multi-host functionality
down to the individual function calls) enables shell script-like logic where
you may introspect the output or return code of a given command and decide what
to do next.


.. _tasks-and-imports:

Defining tasks
==============

When looking for tasks to execute, Fabric imports your fabfile and will
consider any callable object, **except** for the following:

* Callables whose name starts with an underscore (``_``). In other words,
  Python's usual "private" convention holds true here.
* Callables defined within Fabric itself. Fabric's own functions such as
  `~fabric.operations.run` and `~fabric.operations.sudo` will not show up in
  your task list.

.. note::

    To see exactly which callables in your fabfile may be executed via ``fab``,
    use :option:`fab --list <-l>`.
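
As a rough sketch of this filtering (hypothetical code -- fab's real
discovery logic differs in detail, and the check excluding Fabric's own
functions is omitted here)::

    import types

    def find_tasks(module):
        """Collect the public callables from a loaded fabfile module."""
        tasks = {}
        for name in dir(module):
            obj = getattr(module, name)
            if name.startswith('_') or not callable(obj):
                continue  # skip "private" names and plain data
            tasks[name] = obj
        return tasks

    # Example: a stand-in for an imported fabfile.
    fabfile = types.ModuleType('fabfile')
    fabfile.deploy = lambda: None
    fabfile._helper = lambda: None
    fabfile.version = '1.0'
    print(sorted(find_tasks(fabfile)))  # -> ['deploy']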

Imports
-------

Python's ``import`` statement effectively includes the imported objects in your
module's namespace. Since Fabric's fabfiles are just Python modules, this means
that imports are also considered as possible tasks, alongside anything defined
in the fabfile itself.

Because of this, we strongly recommend that you use the ``import module`` form
of importing, followed by ``module.callable()``, which will result in a cleaner
fabfile API than doing ``from module import callable``.

For example, here's a sample fabfile which uses ``urllib.urlopen`` to get some
data out of a webservice::

    from urllib import urlopen

    from fabric.api import run

    def webservice_read():
        objects = urlopen('http://my/web/service/?foo=bar').read().split()
        print(objects)

This looks simple enough, and will run without error. However, look what
happens if we run :option:`fab --list <-l>` on this fabfile::

    $ fab --list
    Available commands:

      urlopen            urlopen(url [, data]) -> open file-like object
      webservice_read

Our fabfile of only one task is showing two "tasks", which is bad enough, and
an unsuspecting user might accidentally try to call ``fab urlopen``, which
probably won't work very well. Imagine any real-world fabfile, which is likely
to be much more complex, and hopefully you can see how this could get messy
fast.

For reference, here's the recommended way to do it::

    import urllib

    from fabric.api import run

    def webservice_read():
        objects = urllib.urlopen('http://my/web/service/?foo=bar').read().split()
        print(objects)

It's a simple change, but it'll make anyone using your fabfile a bit happier.


Defining host lists
===================

Unless you're using Fabric as a simple build system (which is possible, but not
the primary use-case) having tasks won't do you any good without the ability to
specify remote hosts on which to execute them. There are a number of ways to do
so, with scopes varying from global to per-task, and it's possible to mix and
match as needed.

Hosts
-----

Hosts, in this context, refer to what are also called "host strings": Python
strings specifying a username, hostname and port combination, in the form of
``username@hostname:port``. User and/or port (and the associated ``@`` or
``:``) may be omitted, and will be filled by the executing user's local
username, and/or port 22, respectively. Thus, ``admin@foo.com:222``,
``deploy@website`` and ``nameserver1`` could all be valid host strings.

In other words, Fabric expects the same format as the command-line ``ssh``
program.

During execution, Fabric normalizes the host strings given and then stores each
part (username/hostname/port) in the environment dictionary, for both its use
and for tasks to reference if the need arises. See :doc:`env` for details.
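
That normalization can be sketched roughly as follows (a simplified
stand-in, not Fabric's exact parsing code)::

    import getpass

    def normalize(host_string):
        """Split 'user@host:port' into parts, filling in the defaults."""
        user, _, rest = host_string.rpartition('@')
        host, _, port = rest.partition(':')
        return (user or getpass.getuser(), host, int(port) if port else 22)

    print(normalize('admin@foo.com:222'))  # -> ('admin', 'foo.com', 222)
    print(normalize('nameserver1'))        # local username, 'nameserver1', 22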

Roles
-----

Host strings map to single hosts, but sometimes it's useful to arrange hosts in
groups. Perhaps you have a number of Web servers behind a load balancer and
want to update all of them, or want to run a task on "all client servers".
Roles provide a way of defining strings which correspond to lists of host
strings, and can then be specified instead of writing out the entire list every
time.

This mapping is defined as a dictionary, ``env.roledefs``, which must be
modified by a fabfile in order to be used. A simple example::

    from fabric.api import env

    env.roledefs['webservers'] = ['www1', 'www2', 'www3']

Since ``env.roledefs`` is naturally empty by default, you may also opt to
re-assign to it without fear of losing any information (provided you aren't
loading other fabfiles which also modify it, of course)::

    from fabric.api import env

    env.roledefs = {
        'web': ['www1', 'www2', 'www3'],
        'dns': ['ns1', 'ns2']
    }

Use of roles is not required in any way -- it's simply a convenience in
situations where you have common groupings of servers.

.. _host-lists:

How host lists are constructed
------------------------------

There are a number of ways to specify host lists, either globally or per-task,
and generally these methods override one another instead of merging together
(though this may change in future releases.) Each such method is typically
split into two parts, one for hosts and one for roles.

Globally, via ``env``
~~~~~~~~~~~~~~~~~~~~~

The most common method of setting hosts or roles is by modifying two key-value
pairs in the environment dictionary, :doc:`env <env>`: ``hosts`` and ``roles``.
The value of these variables is checked at runtime, while constructing each
task's host list.

Thus, they may be set at module level, which will take effect when the fabfile
is imported::

    from fabric.api import env, run

    env.hosts = ['host1', 'host2']

    def mytask():
        run('ls /var/www')

Such a fabfile, run simply as ``fab mytask``, will run ``mytask`` on ``host1``
followed by ``host2``.

Since the env vars are checked for *each* host, this means that if you have the
need, you can actually modify ``env`` in one task and it will affect all
following tasks::

    from fabric.api import env, run

    def set_hosts():
        env.hosts = ['host1', 'host2']

    def mytask():
        run('ls /var/www')

When run as ``fab set_hosts mytask``, ``set_hosts`` is a "local" task -- its
own host list is empty -- but ``mytask`` will again run on the two hosts given.

.. note::

    This technique used to be a common way of creating fake "roles", but is
    less necessary now that roles are fully implemented. It may still be useful
    in some situations, however.

Alongside ``env.hosts`` is ``env.roles`` (not to be confused with
``env.roledefs``!) which, if given, will be taken as a list of role names to
look up in ``env.roledefs``.

Globally, via the command line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to modifying ``env.hosts`` and ``env.roles`` at the module level,
you may define them by passing comma-separated string arguments to the
command-line switches :option:`--hosts/-H <-H>` and :option:`--roles/-R <-R>`,
e.g.::

    $ fab -H host1,host2 mytask

Such an invocation is directly equivalent to ``env.hosts = ['host1', 'host2']``
-- the argument parser knows to look for these arguments and will modify
``env`` at parse time.

.. note::

    It's possible, and in fact common, to use these switches to set only a
    single host or role. Fabric simply calls ``string.split(',')`` on the given
    string, so a string with no commas turns into a single-item list.

It is important to know that these command-line switches are interpreted
**before** your fabfile is loaded: any reassignment to ``env.hosts`` or
``env.roles`` in your fabfile will overwrite them.

If you wish to nondestructively merge the command-line hosts with your
fabfile-defined ones, make sure your fabfile uses ``env.hosts.extend()``
instead::

    from fabric.api import env, run

    env.hosts.extend(['host3', 'host4'])

    def mytask():
        run('ls /var/www')

When this fabfile is run as ``fab -H host1,host2 mytask``, ``env.hosts`` will
contain ``['host1', 'host2', 'host3', 'host4']`` at the time that ``mytask``
is executed.

.. note::

    ``env.hosts`` is simply a Python list object -- so you may use
    ``env.hosts.append()`` or any other such method you wish.

.. _hosts-per-task-cli:

Per-task, via the command line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Globally setting host lists only works if you want all your tasks to run on the
same host list all the time. This isn't always true, so Fabric provides a few
ways to be more granular and specify host lists which apply to a single task
only. The first of these uses task arguments.

As outlined in :doc:`fab`, it's possible to specify per-task arguments via a
special command-line syntax. In addition to naming actual arguments to your
task function, this may be used to set the ``host``, ``hosts``, ``role`` or
``roles`` "arguments", which are interpreted by Fabric when building host lists
(and removed from the arguments passed to the task itself.)

.. note::

    Since commas are already used to separate task arguments from one another,
    semicolons must be used in the ``hosts`` or ``roles`` arguments to
    delineate individual host strings or role names. Furthermore, the argument
    must be quoted to prevent your shell from interpreting the semicolons.

Take the below fabfile, which is the same one we've been using, but which
doesn't define any host info at all::

    from fabric.api import run

    def mytask():
        run('ls /var/www')

To specify per-task hosts for ``mytask``, execute it like so::

    $ fab mytask:hosts="host1;host2"

This will override any other host list and ensure ``mytask`` always runs on
just those two hosts.

Per-task, via decorators
~~~~~~~~~~~~~~~~~~~~~~~~

If a given task should always run on a predetermined host list, you may wish to
specify this in your fabfile itself. This can be done by decorating a task
function with the `~fabric.decorators.hosts` or `~fabric.decorators.roles`
decorators. These decorators take a variable argument list, like so::

    from fabric.api import hosts, run

    @hosts('host1', 'host2')
    def mytask():
        run('ls /var/www')

When used, they override any checks of ``env`` for that particular task's host
list (though ``env`` is not modified in any way -- it is simply ignored.) Thus,
even if the above fabfile had defined ``env.hosts`` or the call to :doc:`fab
<fab>` uses :option:`--hosts/-H <-H>`, ``mytask`` would still run on a host list of
``['host1', 'host2']``.

However, decorator host lists do **not** override per-task command-line
arguments, as given in the previous section.

Order of precedence
~~~~~~~~~~~~~~~~~~~

We've been pointing out which methods of setting host lists trump the others,
as we've gone along. However, to make things clearer, here's a quick breakdown:

* Per-task, command-line host lists (``fab mytask:host=host1``) override
  absolutely everything else.
* Per-task, decorator-specified host lists (``@hosts('host1')``) override the
  ``env`` variables.
* Globally specified host lists set in the fabfile (``env.hosts = ['host1']``)
  *can* override such lists set on the command-line, but only if you're not
  careful (or want them to.)
* Globally specified host lists set on the command-line (``--hosts=host1``)
  will initialize the ``env`` variables, but that's it.
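
In effect, the breakdown boils down to a "first non-empty source wins" check
per task, which might be sketched as (hypothetical helper, not Fabric's real
code)::

    def host_list_for(cli_per_task, decorator_hosts, env_hosts):
        """Return the first non-empty host list, in precedence order."""
        for source in (cli_per_task, decorator_hosts, env_hosts):
            if source:
                return source
        return []  # no hosts anywhere: a local-only task

    print(host_list_for([], ['host1'], ['host2', 'host3']))  # -> ['host1']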

This logic may change slightly in the future to be more consistent (e.g.
having :option:`--hosts <-H>` somehow take precedence over ``env.hosts`` in the
same way that command-line per-task lists trump in-code ones) but only in a
backwards-incompatible release.

.. _combining-host-lists:

Combining host lists
--------------------

There is no "unionizing" of hosts between the various sources mentioned in
:ref:`host-lists`. If ``env.hosts`` is set to ``['host1', 'host2', 'host3']``,
and a per-function (e.g.  via `~fabric.decorators.hosts`) host list is set to
just ``['host2', 'host3']``, that function will **not** execute on ``host1``,
because the per-task decorator host list takes precedence.

However, for each given source, if both roles **and** hosts are specified, they
will be merged together into a single host list. Take, for example, this
fabfile where both of the decorators are used::

    from fabric.api import env, hosts, roles, run

    env.roledefs = {'role1': ['b', 'c']}

    @hosts('a', 'b')
    @roles('role1')
    def mytask():
        run('ls /var/www')

Assuming no command-line hosts or roles are given when ``mytask`` is executed,
this fabfile will call ``mytask`` on a host list of ``['a', 'b', 'c']`` -- the
union of ``role1`` and the contents of the `~fabric.decorators.hosts` call.
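
That per-source merge might be sketched as follows (hypothetical helper;
Fabric deduplicates, though the exact ordering is an implementation detail)::

    def merge(hosts, roles, roledefs):
        """Union the literal hosts with all hosts from the named roles."""
        combined = list(hosts)
        for role in roles:
            combined.extend(roledefs[role])
        seen = set()
        return [h for h in combined if not (h in seen or seen.add(h))]

    print(merge(['a', 'b'], ['role1'], {'role1': ['b', 'c']}))  # -> ['a', 'b', 'c']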


.. _failures:

Failure handling
================

Once the task list has been constructed, Fabric will start executing them as
outlined in :ref:`execution-strategy`, until all tasks have been run on the
entirety of their host lists. However, Fabric defaults to a "fail-fast"
behavior pattern: if anything goes wrong, such as a remote program returning a
nonzero return value or your fabfile's Python code encountering an exception,
execution will halt immediately.

This is typically the desired behavior, but there are many exceptions to the
rule, so Fabric provides ``env.warn_only``, a Boolean setting. It defaults to
``False``, meaning an error condition will result in the program aborting
immediately. However, if ``env.warn_only`` is set to ``True`` at the time of
failure -- with, say, the `~fabric.context_managers.settings` context
manager -- Fabric will emit a warning message but continue executing.
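
For example, to tolerate one failure-prone command while still aborting on
anything else (the service paths here are placeholders)::

    from fabric.api import run, settings

    def restart():
        # A failed "stop" (e.g. the service wasn't running) shouldn't
        # abort the whole run, but a failed "start" still should.
        with settings(warn_only=True):
            result = run('/etc/init.d/myapp stop')
        if result.failed:
            print("stop failed; the service may not have been running")
        run('/etc/init.d/myapp start')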


Connections
===========

``fab`` itself doesn't actually make any connections to remote hosts. Instead,
it simply ensures that for each distinct run of a task on one of its hosts, the
env var ``env.host_string`` is set to the right value. Users wanting to
leverage Fabric as a library may do so manually to achieve similar effects.
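
For example, a library-style script might look like this (``deploy@host1``
being a placeholder host string)::

    from fabric.api import env, run

    env.host_string = 'deploy@host1'  # set by hand instead of via fab
    run('uptime')                     # connects on demand, as fab would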

``env.host_string`` is (as the name implies) the "current" host string, and is
what Fabric uses to determine what connections to make (or re-use) when
network-aware functions are run. Operations like `~fabric.operations.run` or
`~fabric.operations.put` use ``env.host_string`` as a lookup key in a shared
dictionary which maps host strings to SSH connection objects.

.. note::

    The connections dictionary (currently located at
    ``fabric.state.connections``) acts as a cache, opting to return previously
    created connections if possible in order to save some overhead, and
    creating new ones otherwise.


Lazy connections
----------------

Because connections are driven by the individual operations, Fabric will not
actually make connections until they're necessary. Take for example this task
which does some local housekeeping prior to interacting with the remote
server::

    from fabric.api import *

    @hosts('host1')
    def clean_and_upload():
        local("find assets/ -name '*.DS_Store' -exec rm '{}' \;")
        local('tar czf /tmp/assets.tgz assets/')
        put('/tmp/assets.tgz', '/tmp/assets.tgz')
        with cd('/var/www/myapp/'):
            run('tar xzf /tmp/assets.tgz')

What happens, connection-wise, is as follows:

#. The two `~fabric.operations.local` calls will run without making any network
   connections whatsoever;
#. `~fabric.operations.put` asks the connection cache for a connection to
   ``host1``;
#. The connection cache fails to find an existing connection for that host
   string, and so creates a new SSH connection, returning it to
   `~fabric.operations.put`;
#. `~fabric.operations.put` uploads the file through that connection;
#. Finally, the `~fabric.operations.run` call asks the cache for a connection
   to that same host string, and is given the existing, cached connection for
   its own use.

Extrapolating from this, you can also see that tasks which don't use any
network-borne operations will never actually initiate any connections (though
they will still be run once for each host in their host list, if any.)

Closing connections
-------------------

Fabric's connection cache never closes connections itself -- it leaves this up
to whatever is using it. The :doc:`fab <fab>` tool does this bookkeeping for
you: it iterates over all open connections and closes them just before it exits
(regardless of whether the tasks failed or not.)

Library users will need to ensure they explicitly close all open connections
before their program exits, though we plan to make this easier in the future.
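
Until then, a loop like the following will do the trick -- bearing in mind
that the cache's location is an internal detail and may change::

    from fabric.state import connections

    # Close each cached connection so the process can exit cleanly.
    for key in connections.keys():
        connections[key].close()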