File: deploying-docker.rst

Docker Images
=============

Example docker images are maintained at https://github.com/dask/dask-docker .

Each image installs the full Dask conda package (including the distributed
scheduler), NumPy, and Pandas on top of a Miniconda installation on a Debian
base image.

These images are large, roughly 1 to 2 GB.

-   ``ghcr.io/dask/dask``: This is a normal Debian + Miniconda image with the
    full Dask conda package (including the distributed scheduler), NumPy, and
    Pandas.  This image is about 1 GB in size.

-   ``ghcr.io/dask/dask-notebook``: This is based on the
    `Jupyter base-notebook image <https://hub.docker.com/r/jupyter/base-notebook/>`_,
    so it is suitable both for normal use as a Jupyter server and as part of a
    JupyterHub deployment.  It also includes the matching Dask software
    environment described above.  This image is about 2 GB in size.

Example
-------

Here is a simple example that starts a scheduler, three workers, and a Jupyter
server on a dedicated Docker network:

.. code-block:: bash

   docker network create dask

   # Start the scheduler in the background; port 8787 serves the dashboard
   docker run -d --network dask -p 8787:8787 --name scheduler ghcr.io/dask/dask dask-scheduler

   # Start three workers, each connecting to the scheduler by container name
   docker run -d --network dask ghcr.io/dask/dask dask-worker scheduler:8786
   docker run -d --network dask ghcr.io/dask/dask dask-worker scheduler:8786
   docker run -d --network dask ghcr.io/dask/dask dask-worker scheduler:8786

   # Start a Jupyter server in the foreground (it prints a login URL with a token)
   docker run --network dask -p 8888:8888 ghcr.io/dask/dask-notebook

Then from within the notebook environment you can connect to the Dask cluster like this:

.. code-block:: python

   from dask.distributed import Client
   client = Client("scheduler:8786")
   client

Extensibility
-------------

Users can mildly customize the software environment by setting the environment
variables ``EXTRA_APT_PACKAGES``, ``EXTRA_CONDA_PACKAGES``, and
``EXTRA_PIP_PACKAGES``.  If these variables are set in the container, they
trigger the following calls at startup, respectively::

   apt-get install $EXTRA_APT_PACKAGES
   conda install $EXTRA_CONDA_PACKAGES
   python -m pip install $EXTRA_PIP_PACKAGES

For example, the following command uses ``conda`` to install the ``joblib``
package into the Dask worker's software environment:

.. code-block:: bash

   docker run --network dask -e EXTRA_CONDA_PACKAGES="joblib" ghcr.io/dask/dask dask-worker scheduler:8786
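The install hook that consumes these variables can be sketched as a small
shell function.  This is an illustrative simplification, not the actual
entrypoint script from the dask-docker repository (the function name is
hypothetical), and this dry-run version only prints the commands it would run:

.. code-block:: bash

   # Hypothetical sketch of the startup install hook: each EXTRA_*
   # variable, if non-empty, triggers the corresponding installer.
   # This version echoes the commands instead of executing them.
   install_extras() {
       if [ -n "$EXTRA_APT_PACKAGES" ]; then
           echo "apt-get install $EXTRA_APT_PACKAGES"
       fi
       if [ -n "$EXTRA_CONDA_PACKAGES" ]; then
           echo "conda install $EXTRA_CONDA_PACKAGES"
       fi
       if [ -n "$EXTRA_PIP_PACKAGES" ]; then
           echo "python -m pip install $EXTRA_PIP_PACKAGES"
       fi
   }

   EXTRA_CONDA_PACKAGES="joblib"
   install_extras   # prints: conda install joblib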

Note that using these variables can significantly delay container startup,
especially with ``apt`` or ``conda`` (``pip`` is relatively fast).

Remember that it is important for software versions to match between Dask
workers and Dask clients.  As a result, it is often useful to include the same
extra packages in both Jupyter and Worker images.

Source
------

Dockerfiles are maintained at https://github.com/dask/dask-docker.
This repository also includes a docker-compose configuration.
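For reference, a minimal Compose file mirroring the ``docker run`` example
above might look like the following.  This is an illustrative sketch under the
assumption of Compose's default network (which resolves service names such as
``scheduler``), not the configuration shipped in the repository:

.. code-block:: yaml

   services:
     scheduler:
       image: ghcr.io/dask/dask
       command: dask-scheduler
       ports:
         - "8787:8787"   # dashboard
     worker:
       image: ghcr.io/dask/dask
       command: dask-worker scheduler:8786
       # scale with: docker compose up --scale worker=3
     notebook:
       image: ghcr.io/dask/dask-notebook
       ports:
         - "8888:8888"   # Jupyter server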