.. _guide-workers:

===============
 Workers Guide
===============

.. contents::
    :local:
    :depth: 1

.. _worker-starting:

Starting the worker
===================

.. sidebar:: Daemonizing

    You probably want to use a daemonization tool to start
    the worker in the background. See :ref:`daemonizing` for help
    starting the worker as a daemon using popular service managers.

You can start the worker in the foreground by executing the command:

.. code-block:: console

    $ celery -A proj worker -l INFO

For a full list of available command-line options see
:mod:`~celery.bin.worker`, or simply do:

.. code-block:: console

    $ celery worker --help

You can start multiple workers on the same machine, but
be sure to name each individual worker by specifying a
node name with the :option:`--hostname <celery worker --hostname>` argument:

.. code-block:: console

    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker1@%h
    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker2@%h
    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker3@%h

The ``hostname`` argument can expand the following variables:

    - ``%h``:  Hostname, including domain name.
    - ``%n``:  Hostname only.
    - ``%d``:  Domain name only.

If the current hostname is *george.example.com*, these will expand to:

+----------+----------------+------------------------------+
| Variable | Template       | Result                       |
+----------+----------------+------------------------------+
| ``%h``   | ``worker1@%h`` | *worker1@george.example.com* |
+----------+----------------+------------------------------+
| ``%n``   | ``worker1@%n`` | *worker1@george*             |
+----------+----------------+------------------------------+
| ``%d``   | ``worker1@%d`` | *worker1@example.com*        |
+----------+----------------+------------------------------+

.. admonition:: Note for :pypi:`supervisor` users

   The ``%`` sign must be escaped by adding a second one: `%%h`.

.. _worker-stopping:

Stopping the worker
===================

Shutdown should be accomplished using the :sig:`TERM` signal.

When shutdown is initiated the worker will finish all currently executing
tasks before it actually terminates. If these tasks are important, you should
wait for them to finish before doing anything drastic, like sending the :sig:`KILL`
signal.

If the worker won't shut down after a considerable amount of time, for example
because it's stuck in an infinite loop or similar, you can use the :sig:`KILL`
signal to force terminate the worker: but be aware that currently executing tasks will
be lost (i.e., unless the tasks have the :attr:`~@Task.acks_late`
option set).

Also, since processes can't catch or ignore the :sig:`KILL` signal, the worker will
not be able to reap its children; make sure to do so manually. This
command usually does the trick:

.. code-block:: console

    $ pkill -9 -f 'celery worker'

If you don't have the :command:`pkill` command on your system, you can use the slightly
longer version:

.. code-block:: console

    $ ps auxww | awk '/celery worker/ {print $2}' | xargs kill -9

.. versionchanged:: 5.2
    On Linux systems, Celery now supports sending :sig:`KILL` signal to all child processes
    after worker termination. This is done via `PR_SET_PDEATHSIG` option of ``prctl(2)``.

.. _worker_shutdown:

Worker Shutdown
---------------

We will use the terms *Warm, Soft, Cold, Hard* to describe the different stages of worker shutdown.
The worker will initiate the shutdown process when it receives the :sig:`TERM` or :sig:`QUIT` signal.
The :sig:`INT` (Ctrl-C) signal is also handled during the shutdown process and always triggers the 
next stage of the shutdown process.

.. _worker-warm-shutdown:

Warm Shutdown
~~~~~~~~~~~~~

When the worker receives the :sig:`TERM` signal, it will initiate a warm shutdown. The worker will
finish all currently executing tasks before it actually terminates. The first time the worker receives
the :sig:`INT` (Ctrl-C) signal, it will initiate a warm shutdown as well.

The warm shutdown will stop the call to :func:`WorkController.start() <celery.worker.worker.WorkController.start>`
and will call :func:`WorkController.stop() <celery.worker.worker.WorkController.stop>`.

- Additional :sig:`TERM` signals will be ignored during the warm shutdown process.
- The next :sig:`INT` signal will trigger the next stage of the shutdown process.

.. _worker-cold-shutdown:

Cold Shutdown
~~~~~~~~~~~~~

Cold shutdown is initiated when the worker receives the :sig:`QUIT` signal. The worker will stop
all currently executing tasks and terminate immediately.

.. _worker-REMAP_SIGTERM:

.. note::

    If the environment variable ``REMAP_SIGTERM`` is set to ``SIGQUIT``, the worker will also initiate
    a cold shutdown when it receives the :sig:`TERM` signal instead of a warm shutdown.
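
    For example, the remap can be enabled by exporting the variable before
    starting the worker (a minimal sketch):

    .. code-block:: console

        $ export REMAP_SIGTERM=SIGQUIT
        $ celery -A proj worker -l INFO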

The cold shutdown will stop the call to :func:`WorkController.start() <celery.worker.worker.WorkController.start>`
and will call :func:`WorkController.terminate() <celery.worker.worker.WorkController.terminate>`.

If the warm shutdown has already started, the transition to cold shutdown will run a signal handler ``on_cold_shutdown``
to cancel all currently executing tasks from the MainProcess and potentially trigger the :ref:`worker-soft-shutdown`.

.. _worker-soft-shutdown:

Soft Shutdown
~~~~~~~~~~~~~

.. versionadded:: 5.5

Soft shutdown is a time limited warm shutdown, initiated just before the cold shutdown. The worker will
allow :setting:`worker_soft_shutdown_timeout` seconds for all currently executing tasks to finish before
it terminates. If the time limit is reached, the worker will initiate a cold shutdown and cancel all currently
executing tasks. If the :sig:`QUIT` signal is received during the soft shutdown, the worker will cancel all
currently executing tasks but still wait for the time limit to finish before terminating, giving a chance for
the worker to perform the cold shutdown a little more gracefully.

The soft shutdown is disabled by default to maintain backward compatibility with the :ref:`worker-cold-shutdown`
behavior. To enable the soft shutdown, set :setting:`worker_soft_shutdown_timeout` to a positive float value.
The soft shutdown will be skipped if there are no tasks running. To force the soft shutdown, *also* enable the
:setting:`worker_enable_soft_shutdown_on_idle` setting.
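
For example, a minimal sketch enabling the soft shutdown in the app
configuration (the values are illustrative):

.. code-block:: python

    app.conf.worker_soft_shutdown_timeout = 10.0  # seconds to wait before the cold shutdown
    app.conf.worker_enable_soft_shutdown_on_idle = True  # run the soft shutdown even when idle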

.. warning::

    If the worker is not running any task but has ETA tasks reserved, the soft shutdown will not be initiated
    unless the :setting:`worker_enable_soft_shutdown_on_idle` setting is enabled, which may lead to task loss
    during the cold shutdown. When using ETA tasks, it is recommended to enable the soft shutdown on idle.
    Experiment which :setting:`worker_soft_shutdown_timeout` value works best for your setup to reduce the risk
    of task loss to a minimum.

For example, when setting ``worker_soft_shutdown_timeout=3``, the worker will allow 3 seconds for all currently
executing tasks to finish before it terminates. If the time limit is reached, the worker will initiate a cold shutdown
and cancel all currently executing tasks.

.. code-block:: console

    [INFO/MainProcess] Task myapp.long_running_task[6f748357-b2c7-456a-95de-f05c00504042] received
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 1/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 2/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 3/2000s
    ^C
    worker: Hitting Ctrl+C again will initiate cold shutdown, terminating all running tasks!

    worker: Warm shutdown (MainProcess)
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 4/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 5/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 6/2000s
    ^C
    worker: Hitting Ctrl+C again will terminate all running tasks!
    [WARNING/MainProcess] Initiating Soft Shutdown, terminating in 3 seconds
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 7/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 8/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 9/2000s
    [WARNING/MainProcess] Restoring 1 unacknowledged message(s)

- The next :sig:`QUIT` signal will cancel the tasks that are still running in the soft shutdown, but the worker
  will still wait for the time limit to finish before terminating.
- The next (2nd) :sig:`QUIT` or :sig:`INT` signal will trigger the next stage of the shutdown process.

.. _worker-hard-shutdown:

Hard Shutdown
~~~~~~~~~~~~~

.. versionadded:: 5.5

Hard shutdown is mostly for local or debug purposes, allowing you to spam the :sig:`INT` (Ctrl-C) signal
to force the worker to terminate immediately. The worker will stop all currently executing tasks and
terminate immediately by raising a :exc:`@WorkerTerminate` exception in the MainProcess.

For example, notice the ``^C`` in the logs below (using the :sig:`INT` signal to move from stage to stage):

.. code-block:: console

    [INFO/MainProcess] Task myapp.long_running_task[7235ac16-543d-4fd5-a9e1-2d2bb8ab630a] received
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 1/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 2/2000s
    ^C
    worker: Hitting Ctrl+C again will initiate cold shutdown, terminating all running tasks!

    worker: Warm shutdown (MainProcess)
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 3/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 4/2000s
    ^C
    worker: Hitting Ctrl+C again will terminate all running tasks!
    [WARNING/MainProcess] Initiating Soft Shutdown, terminating in 10 seconds
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 5/2000s
    [WARNING/ForkPoolWorker-8] long_running_task is running, sleeping 6/2000s
    ^C
    Waiting gracefully for cold shutdown to complete...

    worker: Cold shutdown (MainProcess)
    ^C[WARNING/MainProcess] Restoring 1 unacknowledged message(s)

.. warning::

    The log ``Restoring 1 unacknowledged message(s)`` is misleading as it is not guaranteed that the message
    will be restored after a hard shutdown. The :ref:`worker-soft-shutdown` allows adding a time window just between
    the warm and the cold shutdown that improves the gracefulness of the shutdown process.

.. _worker-restarting:

Restarting the worker
=====================

To restart the worker you should send the `TERM` signal and start a new
instance. The easiest way to manage workers for development
is by using `celery multi`:

.. code-block:: console

    $ celery multi start 1 -A proj -l INFO -c4 --pidfile=/var/run/celery/%n.pid
    $ celery multi restart 1 --pidfile=/var/run/celery/%n.pid

For production deployments you should be using init-scripts or a process
supervision system (see :ref:`daemonizing`).

Instead of stopping and then starting the worker to restart it, you can also
restart the worker using the :sig:`HUP` signal. Note that the worker
will be responsible for restarting itself, so this is prone to problems and
isn't recommended in production:

.. code-block:: console

    $ kill -HUP $pid

.. note::

    Restarting by :sig:`HUP` only works if the worker is running
    in the background as a daemon (it doesn't have a controlling
    terminal).

    :sig:`HUP` is disabled on macOS because of a limitation on
    that platform.

Automatic re-connection on connection loss to broker
====================================================

.. versionadded:: 5.3

Unless :setting:`broker_connection_retry_on_startup` is set to False,
Celery will automatically retry reconnecting to the broker after the first
connection loss. :setting:`broker_connection_retry` controls whether to automatically
retry reconnecting to the broker for subsequent reconnects.

.. versionadded:: 5.1

If :setting:`worker_cancel_long_running_tasks_on_connection_loss` is set to True,
Celery will also cancel any long running task that is currently running.

.. versionadded:: 5.3

Since the message broker does not track how many tasks were already fetched before
the connection was lost, Celery will reduce the prefetch count by the number of
tasks that are currently running multiplied by :setting:`worker_prefetch_multiplier`.
The prefetch count will be gradually restored to the maximum allowed each
time a task that was running before the connection was lost completes.

This feature is enabled by default, but can be disabled by setting
:setting:`worker_enable_prefetch_count_reduction` to False.
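
For example, a minimal sketch of the related settings in the app
configuration (the values are illustrative and should be tuned per deployment):

.. code-block:: python

    app.conf.broker_connection_retry_on_startup = True   # retry the very first connection
    app.conf.broker_connection_retry = True              # retry subsequent reconnects
    app.conf.worker_cancel_long_running_tasks_on_connection_loss = True
    app.conf.worker_enable_prefetch_count_reduction = True  # default; set False to disable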

.. _worker-process-signals:

Process Signals
===============

The worker's main process overrides the following signals:

+--------------+-------------------------------------------------+
| :sig:`TERM`  | Warm shutdown, wait for tasks to complete.      |
+--------------+-------------------------------------------------+
| :sig:`QUIT`  | Cold shutdown, terminate ASAP                   |
+--------------+-------------------------------------------------+
| :sig:`USR1`  | Dump traceback for all active threads.          |
+--------------+-------------------------------------------------+
| :sig:`USR2`  | Remote debug, see :mod:`celery.contrib.rdb`.    |
+--------------+-------------------------------------------------+
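
For example, you can ask the worker's main process to dump a traceback of all
active threads by sending it the :sig:`USR1` signal (assuming ``$pid`` holds
the pid of the main process):

.. code-block:: console

    $ kill -USR1 $pid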

.. _worker-files:

Variables in file paths
=======================

The file path arguments for :option:`--logfile <celery worker --logfile>`,
:option:`--pidfile <celery worker --pidfile>`, and
:option:`--statedb <celery worker --statedb>` can contain variables that the
worker will expand:

Node name replacements
----------------------

- ``%p``:  Full node name.
- ``%h``:  Hostname, including domain name.
- ``%n``:  Hostname only.
- ``%d``:  Domain name only.
- ``%i``:  Prefork pool process index or 0 if MainProcess.
- ``%I``:  Prefork pool process index with separator.

For example, if the current hostname is ``george@foo.example.com`` then
these will expand to:

- ``--logfile=%p.log`` -> :file:`george@foo.example.com.log`
- ``--logfile=%h.log`` -> :file:`foo.example.com.log`
- ``--logfile=%n.log`` -> :file:`george.log`
- ``--logfile=%d.log`` -> :file:`example.com.log`

.. _worker-files-process-index:

Prefork pool process index
--------------------------

The prefork pool process index specifiers will expand into a different
filename depending on the process that'll eventually need to open the file.

This can be used to specify one log file per child process.

Note that the numbers will stay within the process limit even if processes
exit or if autoscale/``maxtasksperchild``/time limits are used.  That is, the number
is the *process index* not the process count or pid.

* ``%i`` - Pool process index or 0 if MainProcess.

    Where ``-n worker1@example.com -c2 -f %n-%i.log`` will result in
    three log files:

        - :file:`worker1-0.log` (main process)
        - :file:`worker1-1.log` (pool process 1)
        - :file:`worker1-2.log` (pool process 2)

* ``%I`` - Pool process index with separator.

    Where ``-n worker1@example.com -c2 -f %n%I.log`` will result in
    three log files:

        - :file:`worker1.log` (main process)
        - :file:`worker1-1.log` (pool process 1)
        - :file:`worker1-2.log` (pool process 2)

.. _worker-concurrency:

Concurrency
===========

By default multiprocessing is used to perform concurrent execution of tasks,
but you can also use :ref:`Eventlet <concurrency-eventlet>`. The number
of worker processes/threads can be changed using the
:option:`--concurrency <celery worker --concurrency>` argument and defaults
to the number of CPUs available on the machine.

.. admonition:: Number of processes (multiprocessing/prefork pool)

    More pool processes are usually better, but there's a cut-off point where
    adding more pool processes affects performance in negative ways.
    There's even some evidence to suggest that running multiple worker
    instances may perform better than a single worker: for example, 3 workers
    with 10 pool processes each. You need to experiment to find the numbers
    that work best for you, as this varies based on application, workload,
    task run times and other factors.
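
    For example, a quick way to try several workers on one machine is
    :program:`celery multi` (a minimal sketch; tune the numbers for your
    workload):

    .. code-block:: console

        $ celery multi start 3 -A proj -l INFO -c10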

.. _worker-remote-control:

Remote control
==============

.. versionadded:: 2.0

.. sidebar:: The ``celery`` command

    The :program:`celery` program is used to execute remote control
    commands from the command-line. It supports all of the commands
    listed below. See :ref:`monitoring-control` for more information.

:pool support: *prefork, eventlet, gevent, thread*, blocking:*solo* (see note)
:broker support: *amqp, redis*

Workers have the ability to be remote controlled using a high-priority
broadcast message queue. The commands can be directed to all, or a specific
list of workers.

Commands can also have replies. The client can then wait for and collect
those replies. Since there's no central authority to know how many
workers are available in the cluster, there's also no way to estimate
how many workers may send a reply, so the client has a configurable
timeout — the deadline in seconds for replies to arrive in. This timeout
defaults to one second. If the worker doesn't reply within the deadline
it doesn't necessarily mean the worker didn't reply, or worse is dead, but
may simply be caused by network latency or the worker being slow at processing
commands, so adjust the timeout accordingly.

In addition to timeouts, the client can specify the maximum number
of replies to wait for. If a destination is specified, this limit is set
to the number of destination hosts.
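
For example, a sketch that waits at most two seconds and collects at most
three replies to a broadcast :control:`ping`:

.. code-block:: pycon

    >>> app.control.broadcast('ping', reply=True, timeout=2, limit=3)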

.. note::

    The ``solo`` pool supports remote control commands,
    but any task executing will block any waiting control command,
    so it is of limited use if the worker is very busy. In that
    case you must increase the timeout waiting for replies in the client.

.. _worker-broadcast-fun:

The :meth:`~@control.broadcast` function
----------------------------------------------------

This is the client function used to send commands to the workers.
Some remote control commands also have higher-level interfaces using
:meth:`~@control.broadcast` in the background, like
:meth:`~@control.rate_limit`, and :meth:`~@control.ping`.

Sending the :control:`rate_limit` command and keyword arguments:

.. code-block:: pycon

    >>> app.control.broadcast('rate_limit',
    ...                          arguments={'task_name': 'myapp.mytask',
    ...                                     'rate_limit': '200/m'})

This will send the command asynchronously, without waiting for a reply.
To request a reply you have to use the `reply` argument:

.. code-block:: pycon

    >>> app.control.broadcast('rate_limit', {
    ...     'task_name': 'myapp.mytask', 'rate_limit': '200/m'}, reply=True)
    [{'worker1.example.com': 'New rate limit set successfully'},
     {'worker2.example.com': 'New rate limit set successfully'},
     {'worker3.example.com': 'New rate limit set successfully'}]

Using the `destination` argument you can specify a list of workers
to receive the command:

.. code-block:: pycon

    >>> app.control.broadcast('rate_limit', {
    ...     'task_name': 'myapp.mytask',
    ...     'rate_limit': '200/m'}, reply=True,
    ...                             destination=['worker1@example.com'])
    [{'worker1.example.com': 'New rate limit set successfully'}]


Of course, using the higher-level interface to set rate limits is much
more convenient, but there are commands that can only be requested
using :meth:`~@control.broadcast`.

Commands
========

.. control:: revoke

``revoke``: Revoking tasks
--------------------------
:pool support: all, terminate only supported by prefork, eventlet and gevent
:broker support: *amqp, redis*
:command: :program:`celery -A proj control revoke <task_id>`

All worker nodes keep a memory of revoked task ids, either in memory or
persistent on disk (see :ref:`worker-persistent-revokes`).

.. note::

    The maximum number of revoked tasks to keep in memory can be
    specified using the ``CELERY_WORKER_REVOKES_MAX`` environment
    variable, which defaults to 50000. When the limit has been exceeded,
    the revokes will be active for 10800 seconds (3 hours) before being
    expired. This value can be changed using the
    ``CELERY_WORKER_REVOKE_EXPIRES`` environment variable.

    Memory limits can also be set for successful tasks through the
    ``CELERY_WORKER_SUCCESSFUL_MAX`` and
    ``CELERY_WORKER_SUCCESSFUL_EXPIRES`` environment variables, and
    default to 1000 and 10800 respectively.
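
    For example, these limits can be raised by exporting the environment
    variables before starting the worker (the values are illustrative):

    .. code-block:: console

        $ export CELERY_WORKER_REVOKES_MAX=100000
        $ export CELERY_WORKER_REVOKE_EXPIRES=7200
        $ celery -A proj worker -l INFO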

When a worker receives a revoke request it will skip executing
the task, but it won't terminate an already executing task unless
the `terminate` option is set.

.. note::

    The terminate option is a last resort for administrators when
    a task is stuck. It's not for terminating the task,
    it's for terminating the process that's executing the task, and that
    process may have already started processing another task at the point
    when the signal is sent, so for this reason you must never call this
    programmatically.

If `terminate` is set the worker child process processing the task
will be terminated. The default signal sent is `TERM`, but you can
specify this using the `signal` argument. Signal can be the uppercase name
of any signal defined in the :mod:`signal` module in the Python Standard
Library.

Terminating a task also revokes it.

**Example**

.. code-block:: pycon

    >>> result.revoke()

    >>> AsyncResult(id).revoke()

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed')

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
    ...                    terminate=True)

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
    ...                    terminate=True, signal='SIGKILL')




Revoking multiple tasks
-----------------------

.. versionadded:: 3.1


The revoke method also accepts a list argument, where it will revoke
several tasks at once.

**Example**

.. code-block:: pycon

    >>> app.control.revoke([
    ...    '7993b0aa-1f0b-4780-9af0-c47c0858b3f2',
    ...    'f565793e-b041-4b2b-9ca4-dca22762a55d',
    ...    'd9d35e03-2997-42d0-a13e-64a66b88a618',
    ... ])


The ``GroupResult.revoke`` method takes advantage of this since
version 3.1.

.. _worker-persistent-revokes:

Persistent revokes
------------------

Revoking tasks works by sending a broadcast message to all the workers;
the workers then keep a list of revoked tasks in memory. When a worker starts
up it will synchronize revoked tasks with other workers in the cluster.

The list of revoked tasks is in-memory so if all workers restart the list
of revoked ids will also vanish. If you want to preserve this list between
restarts you need to specify a file for these to be stored in by using the `--statedb`
argument to :program:`celery worker`:

.. code-block:: console

    $ celery -A proj worker -l INFO --statedb=/var/run/celery/worker.state

or, if you use :program:`celery multi`, you'll want to create one file per
worker instance, so use the `%n` format to expand the current node
name:

.. code-block:: console

    $ celery multi start 2 -l INFO --statedb=/var/run/celery/%n.state


See also :ref:`worker-files`.

Note that remote control commands must be working for revokes to work.
Remote control commands are only supported by the RabbitMQ (amqp) and Redis
brokers at this point.

.. control:: revoke_by_stamped_headers

``revoke_by_stamped_headers``: Revoking tasks by their stamped headers
----------------------------------------------------------------------
:pool support: all, terminate only supported by prefork and eventlet
:broker support: *amqp, redis*
:command: :program:`celery -A proj control revoke_by_stamped_headers <header=value>`

This command is similar to :meth:`~@control.revoke`, but instead of
specifying the task id(s), you specify the stamped header(s) as key-value pair(s),
and each task that has a stamped header matching the key-value pair(s) will be revoked.

.. warning::

    The revoked headers mapping is not persistent across restarts, so if you
    restart the workers, the revoked headers will be lost and need to be
    mapped again.

.. warning::

    This command may perform poorly if your worker pool concurrency is high
    and terminate is enabled, since it will have to iterate over all the running
    tasks to find the ones with the specified stamped header.

**Example**

.. code-block:: pycon

    >>> app.control.revoke_by_stamped_headers({'header': 'value'})

    >>> app.control.revoke_by_stamped_headers({'header': 'value'}, terminate=True)

    >>> app.control.revoke_by_stamped_headers({'header': 'value'}, terminate=True, signal='SIGKILL')


Revoking multiple tasks by stamped headers
------------------------------------------

.. versionadded:: 5.3

The ``revoke_by_stamped_headers`` method also accepts a list argument, where it will revoke
by several headers or several values.

**Example**

.. code-block:: pycon

    >>> app.control.revoke_by_stamped_headers({
    ...    'header_A': 'value_1',
    ...    'header_B': ['value_2', 'value_3'],
    ... })

This will revoke all of the tasks that have a stamped header ``header_A`` with value ``value_1``,
and all of the tasks that have a stamped header ``header_B`` with values ``value_2`` or ``value_3``.

**CLI Example**

.. code-block:: console

    $ celery -A proj control revoke_by_stamped_headers stamped_header_key_A=stamped_header_value_1 stamped_header_key_B=stamped_header_value_2

    $ celery -A proj control revoke_by_stamped_headers stamped_header_key_A=stamped_header_value_1 stamped_header_key_B=stamped_header_value_2 --terminate

    $ celery -A proj control revoke_by_stamped_headers stamped_header_key_A=stamped_header_value_1 stamped_header_key_B=stamped_header_value_2 --terminate --signal=SIGKILL

.. _worker-time-limits:

Time Limits
===========

.. versionadded:: 2.0

:pool support: *prefork/gevent (see note below)*

.. sidebar:: Soft, or hard?

    The time limit is set in two values, `soft` and `hard`.
    The soft time limit allows the task to catch an exception
    to clean up before it is killed: the hard timeout isn't catch-able
    and force terminates the task.

A single task can potentially run forever, if you have lots of tasks
waiting for some event that'll never happen you'll block the worker
from processing new tasks indefinitely. The best way to defend against
this scenario happening is enabling time limits.

The time limit (`--time-limit`) is the maximum number of seconds a task
may run before the process executing it is terminated and replaced by a
new process. You can also enable a soft time limit (`--soft-time-limit`),
this raises an exception the task can catch to clean up before the hard
time limit kills it:

.. code-block:: python

    from myapp import app
    from celery.exceptions import SoftTimeLimitExceeded

    @app.task
    def mytask():
        try:
            do_work()
        except SoftTimeLimitExceeded:
            clean_up_in_a_hurry()

Time limits can also be set using the :setting:`task_time_limit` /
:setting:`task_soft_time_limit` settings. You can also specify a time limit
for client-side operations using the ``timeout`` argument of the
``AsyncResult.get()`` method.
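
For example, the same limits can be set globally in the configuration
(a minimal sketch; the values are illustrative):

.. code-block:: python

    app.conf.task_time_limit = 120       # hard limit, in seconds
    app.conf.task_soft_time_limit = 60   # soft limit, in seconds

On the client side, ``result.get(timeout=30)`` raises
:exc:`celery.exceptions.TimeoutError` if the result doesn't arrive in time.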

.. note::

    Time limits don't currently work on platforms that don't support
    the :sig:`SIGUSR1` signal.

.. note::

    The gevent pool does not implement soft time limits. Additionally,
    it will not enforce the hard time limit if the task is blocking.


Changing time limits at run-time
--------------------------------
.. versionadded:: 2.3

:broker support: *amqp, redis*

There's a remote control command that enables you to change both soft
and hard time limits for a task — named ``time_limit``.

Example changing the time limit for the ``tasks.crawl_the_web`` task
to have a soft time limit of one minute, and a hard time limit of
two minutes:

.. code-block:: pycon

    >>> app.control.time_limit('tasks.crawl_the_web',
                               soft=60, hard=120, reply=True)
    [{'worker1.example.com': {'ok': 'time limits set successfully'}}]

Only tasks that start executing after the time limit change will be affected.

.. _worker-rate-limits:

Rate Limits
===========

.. control:: rate_limit

Changing rate-limits at run-time
--------------------------------

Example changing the rate limit for the `myapp.mytask` task to execute
at most 200 tasks of that type every minute:

.. code-block:: pycon

    >>> app.control.rate_limit('myapp.mytask', '200/m')

The above doesn't specify a destination, so the change request will affect
all worker instances in the cluster. If you only want to affect a specific
list of workers you can include the ``destination`` argument:

.. code-block:: pycon

    >>> app.control.rate_limit('myapp.mytask', '200/m',
    ...            destination=['celery@worker1.example.com'])

.. warning::

    This won't affect workers with the
    :setting:`worker_disable_rate_limits` setting enabled.

.. _worker-max-tasks-per-child:

Max tasks per child setting
===========================

.. versionadded:: 2.0

:pool support: *prefork*

With this option you can configure the maximum number of tasks
a worker can execute before it's replaced by a new process.

This is useful if you have memory leaks you have no control over,
for example from closed source C extensions.

The option can be set using the workers
:option:`--max-tasks-per-child <celery worker --max-tasks-per-child>` argument
or using the :setting:`worker_max_tasks_per_child` setting.
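
For example (the value is illustrative):

.. code-block:: console

    $ celery -A proj worker --max-tasks-per-child=100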

.. _worker-max-memory-per-child:

Max memory per child setting
============================

.. versionadded:: 4.0

:pool support: *prefork*

With this option you can configure the maximum amount of resident
memory a worker can consume before it's replaced by a new process.

This is useful if you have memory leaks you have no control over,
for example from closed source C extensions.

The option can be set using the workers
:option:`--max-memory-per-child <celery worker --max-memory-per-child>` argument
or using the :setting:`worker_max_memory_per_child` setting.
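
For example, to recycle a child process once it uses more than roughly 200 MB
of resident memory (the limit is given in kilobytes; the value is illustrative):

.. code-block:: console

    $ celery -A proj worker --max-memory-per-child=200000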

.. _worker-autoscaling:

Autoscaling
===========

.. versionadded:: 2.2

:pool support: *prefork*, *gevent*

The *autoscaler* component is used to dynamically resize the pool
based on load:

- The autoscaler adds more pool processes when there is work to do,
  and starts removing processes when the workload is low.

It's enabled by the :option:`--autoscale <celery worker --autoscale>` option,
which needs two numbers: the maximum and minimum number of pool processes:

.. code-block:: text

        --autoscale=AUTOSCALE
             Enable autoscaling by providing
             max_concurrency,min_concurrency.  Example:
               --autoscale=10,3 (always keep 3 processes, but grow to
              10 if necessary).

You can also define your own rules for the autoscaler by subclassing
:class:`~celery.worker.autoscale.Autoscaler`.
Some ideas for metrics include load average or the amount of memory available.
You can specify a custom autoscaler with the :setting:`worker_autoscaler` setting.
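
As a rough illustration, the sketch below scales down when the one-minute load
average exceeds the CPU count. It assumes the ``_maybe_scale`` hook and the
``processes``/``min_concurrency``/``scale_down`` attributes of the current
``Autoscaler`` implementation, so verify them against the Celery version you
run; the module path ``myapp.autoscale`` is hypothetical:

.. code-block:: python

    # myapp/autoscale.py -- a hedged sketch, not a drop-in implementation.
    import os

    from celery.worker.autoscale import Autoscaler


    class LoadAutoscaler(Autoscaler):
        """Scale down one process at a time while the host is overloaded."""

        def _maybe_scale(self, req=None):
            load, _, _ = os.getloadavg()  # POSIX only
            if load > (os.cpu_count() or 1) and self.processes > self.min_concurrency:
                self.scale_down(1)
                return True
            # Fall back to the default reserved-task based behavior.
            return super()._maybe_scale(req=req)

The worker would then be pointed at it with
``worker_autoscaler = 'myapp.autoscale:LoadAutoscaler'``.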

.. _worker-queues:

Queues
======

A worker instance can consume from any number of queues.
By default it will consume from all queues defined in the
:setting:`task_queues` setting (which, if not specified, falls back to the
default queue named ``celery``).

You can specify what queues to consume from at start-up, by giving a comma
separated list of queues to the :option:`-Q <celery worker -Q>` option:

.. code-block:: console

    $ celery -A proj worker -l INFO -Q foo,bar,baz

If the queue name is defined in :setting:`task_queues` it will use that
configuration, but if it's not defined in the list of queues Celery will
automatically generate a new queue for you (depending on the
:setting:`task_create_missing_queues` option).

You can also tell the worker to start and stop consuming from a queue at
run-time using the remote control commands :control:`add_consumer` and
:control:`cancel_consumer`.

.. control:: add_consumer

Queues: Adding consumers
------------------------

The :control:`add_consumer` control command will tell one or more workers
to start consuming from a queue. This operation is idempotent.

To tell all workers in the cluster to start consuming from a queue
named "``foo``" you can use the :program:`celery control` program:

.. code-block:: console

    $ celery -A proj control add_consumer foo
    -> worker1.local: OK
        started consuming from u'foo'

If you want to specify a specific worker you can use the
:option:`--destination <celery control --destination>` argument:

.. code-block:: console

    $ celery -A proj control add_consumer foo -d celery@worker1.local

The same can be accomplished dynamically using the :meth:`@control.add_consumer` method:

.. code-block:: pycon

    >>> app.control.add_consumer('foo', reply=True)
    [{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]

    >>> app.control.add_consumer('foo', reply=True,
    ...                          destination=['worker1@example.com'])
    [{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]


So far we've only shown examples using automatic queues.
If you need more control you can also specify the exchange, routing_key and
even other options:

.. code-block:: pycon

    >>> app.control.add_consumer(
    ...     queue='baz',
    ...     exchange='ex',
    ...     exchange_type='topic',
    ...     routing_key='media.*',
    ...     options={
    ...         'queue_durable': False,
    ...         'exchange_durable': False,
    ...     },
    ...     reply=True,
    ...     destination=['w1@example.com', 'w2@example.com'])


.. control:: cancel_consumer

Queues: Canceling consumers
---------------------------

You can cancel a consumer by queue name using the :control:`cancel_consumer`
control command.

To force all workers in the cluster to cancel consuming from a queue
you can use the :program:`celery control` program:

.. code-block:: console

    $ celery -A proj control cancel_consumer foo

The :option:`--destination <celery control --destination>` argument can be
used to specify a worker, or a list of workers, to act on the command:

.. code-block:: console

    $ celery -A proj control cancel_consumer foo -d celery@worker1.local


You can also cancel consumers programmatically using the
:meth:`@control.cancel_consumer` method:

.. code-block:: pycon

    >>> app.control.cancel_consumer('foo', reply=True)
    [{u'worker1.local': {u'ok': u"no longer consuming from u'foo'"}}]

.. control:: active_queues

Queues: List of active queues
-----------------------------

You can get a list of queues that a worker consumes from by using
the :control:`active_queues` control command:

.. code-block:: console

    $ celery -A proj inspect active_queues
    [...]

Like all other remote control commands this also supports the
:option:`--destination <celery inspect --destination>` argument used
to specify the workers that should reply to the request:

.. code-block:: console

    $ celery -A proj inspect active_queues -d celery@worker1.local
    [...]


This can also be done programmatically by using the
:meth:`~celery.app.control.Inspect.active_queues` method:

.. code-block:: pycon

    >>> app.control.inspect().active_queues()
    [...]

    >>> app.control.inspect(['worker1.local']).active_queues()
    [...]

.. _worker-inspect:

Inspecting workers
==================

:class:`@control.inspect` lets you inspect running workers. It
uses remote control commands under the hood.

You can also use the ``celery`` command to inspect workers,
and it supports the same commands as the :class:`@control` interface.

.. code-block:: pycon

    >>> # Inspect all nodes.
    >>> i = app.control.inspect()

    >>> # Specify multiple nodes to inspect.
    >>> i = app.control.inspect(['worker1.example.com',
                                'worker2.example.com'])

    >>> # Specify a single node to inspect.
    >>> i = app.control.inspect('worker1.example.com')

.. _worker-inspect-registered-tasks:

Dump of registered tasks
------------------------

You can get a list of tasks registered in the worker using
:meth:`~celery.app.control.Inspect.registered`:

.. code-block:: pycon

    >>> i.registered()
    [{'worker1.example.com': ['tasks.add',
                              'tasks.sleeptask']}]

.. _worker-inspect-active-tasks:

Dump of currently executing tasks
---------------------------------

You can get a list of active tasks using
:meth:`~celery.app.control.Inspect.active`:

.. code-block:: pycon

    >>> i.active()
    [{'worker1.example.com':
        [{'name': 'tasks.sleeptask',
          'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
          'args': '(8,)',
          'kwargs': '{}'}]}]

.. _worker-inspect-eta-schedule:

Dump of scheduled (ETA) tasks
-----------------------------

You can get a list of tasks waiting to be scheduled by using
:meth:`~celery.app.control.Inspect.scheduled`:

.. code-block:: pycon

    >>> i.scheduled()
    [{'worker1.example.com':
        [{'eta': '2010-06-07 09:07:52', 'priority': 0,
          'request': {
            'name': 'tasks.sleeptask',
            'id': '1a7980ea-8b19-413e-91d2-0b74f3844c4d',
            'args': '[1]',
            'kwargs': '{}'}},
         {'eta': '2010-06-07 09:07:53', 'priority': 0,
          'request': {
            'name': 'tasks.sleeptask',
            'id': '49661b9a-aa22-4120-94b7-9ee8031d219d',
            'args': '[2]',
            'kwargs': '{}'}}]}]

.. note::

    These are tasks with an ETA/countdown argument, not periodic tasks.

.. _worker-inspect-reserved:

Dump of reserved tasks
----------------------

Reserved tasks are tasks that have been received, but are still waiting to be
executed.

You can get a list of these using
:meth:`~celery.app.control.Inspect.reserved`:

.. code-block:: pycon

    >>> i.reserved()
    [{'worker1.example.com':
        [{'name': 'tasks.sleeptask',
          'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
          'args': '(8,)',
          'kwargs': '{}'}]}]


.. _worker-statistics:

Statistics
----------

The remote control command ``inspect stats`` (or
:meth:`~celery.app.control.Inspect.stats`) will give you a long list of useful (or not
so useful) statistics about the worker:

.. code-block:: console

    $ celery -A proj inspect stats

For the output details, consult the reference documentation of :meth:`~celery.app.control.Inspect.stats`.
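
The same information is available programmatically:

.. code-block:: pycon

    >>> app.control.inspect().stats()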

Additional Commands
===================

.. control:: shutdown

Remote shutdown
---------------

This command will gracefully shut down the worker remotely:

.. code-block:: pycon

    >>> app.control.broadcast('shutdown') # shutdown all workers
    >>> app.control.broadcast('shutdown', destination='worker1@example.com')

.. control:: ping

Ping
----

This command requests a ping from alive workers.
The workers reply with the string 'pong', and that's just about it.
It will use the default one second timeout for replies unless you specify
a custom timeout:

.. code-block:: pycon

    >>> app.control.ping(timeout=0.5)
    [{'worker1.example.com': 'pong'},
     {'worker2.example.com': 'pong'},
     {'worker3.example.com': 'pong'}]

:meth:`~@control.ping` also supports the `destination` argument,
so you can specify the workers to ping:

.. code-block:: pycon

    >>> app.control.ping(['worker2.example.com', 'worker3.example.com'])
    [{'worker2.example.com': 'pong'},
     {'worker3.example.com': 'pong'}]

.. _worker-enable-events:

.. control:: enable_events
.. control:: disable_events

Enable/disable events
---------------------

You can enable/disable events by using the `enable_events`,
`disable_events` commands. This is useful to temporarily monitor
a worker using :program:`celery events`/:program:`celerymon`.

.. code-block:: pycon

    >>> app.control.enable_events()
    >>> app.control.disable_events()

.. _worker-custom-control-commands:

Writing your own remote control commands
========================================

There are two types of remote control commands:

- Inspect command

    Does not have side effects, will usually just return some value
    found in the worker, like the list of currently registered tasks,
    the list of active tasks, etc.

- Control command

    Performs side effects, like adding a new queue to consume from.

Remote control commands are registered in the control panel and
they take a single argument: the current
:class:`!celery.worker.control.ControlDispatch` instance.
From there you have access to the active
:class:`~celery.worker.consumer.Consumer` if needed.

Here's an example control command that increments the task prefetch count:

.. code-block:: python

    from celery.worker.control import control_command

    @control_command(
        args=[('n', int)],
        signature='[N=1]',  # <- used for help on the command-line.
    )
    def increase_prefetch_count(state, n=1):
        state.consumer.qos.increment_eventually(n)
        return {'ok': 'prefetch count incremented'}

Make sure you add this code to a module that is imported by the worker:
this could be the same module as where your Celery app is defined, or you
can add the module to the :setting:`imports` setting.

Restart the worker so that the control command is registered, and now you
can call your command using the :program:`celery control` utility:

.. code-block:: console

    $ celery -A proj control increase_prefetch_count 3

You can also add actions to the :program:`celery inspect` program,
for example one that reads the current prefetch count:

.. code-block:: python

    from celery.worker.control import inspect_command

    @inspect_command()
    def current_prefetch_count(state):
        return {'prefetch_count': state.consumer.qos.value}


After restarting the worker you can now query this value using the
:program:`celery inspect` program:

.. code-block:: console

    $ celery -A proj inspect current_prefetch_count