# Using IPython for parallel computing
```{eval-rst}
:Release: |release|
:Date: |today|
```
(install)=
## Installing IPython Parallel
As of IPython 4.0, IPython Parallel is a standalone package called {mod}`ipyparallel`.
You can install it with:
```
pip install ipyparallel
```
or:
```
conda install ipyparallel
```
As of IPython Parallel 7,
this will include installing/enabling an extension for both the classic Jupyter Notebook
and JupyterLab ≥ 3.0.
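To confirm the installation, you can import the package and check its version (a minimal sanity check; the version printed depends on what you installed):
```python
import ipyparallel as ipp

# verify that the package imports and report the installed version
print(ipp.__version__)
```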
## Quickstart
A quick example that uses IPython Parallel to:
1. allocate a cluster (collection of IPython engines for use in parallel)
2. run a collection of tasks on the cluster
3. wait interactively for results
4. cleanup resources after the task is done
```python
import time

import ipyparallel as ipp

task_durations = [1] * 25

# request a cluster
with ipp.Cluster() as rc:
    # get a view on the cluster
    view = rc.load_balanced_view()
    # submit the tasks
    asyncresult = view.map_async(time.sleep, task_durations)
    # wait interactively for results
    asyncresult.wait_interactive()
    # retrieve actual results
    result = asyncresult.get()
# at this point, the cluster processes have been shutdown
```
<video autoplay loop width="100%">
<source src="_static/basic.mp4"/>
</video>
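The context manager above starts the cluster and shuts it down for you. If you prefer to manage the lifecycle yourself, a minimal sketch (assuming the `Cluster.start_and_connect_sync` and `Cluster.stop_cluster_sync` methods introduced in IPython Parallel 7) looks like this:
```python
import time

import ipyparallel as ipp

# start a cluster explicitly instead of using the context manager
cluster = ipp.Cluster(n=4)
rc = cluster.start_and_connect_sync()

view = rc.load_balanced_view()
result = view.map_sync(time.sleep, [1] * 8)

# shut down the engines and controller when you are done
cluster.stop_cluster_sync()
```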
You can similarly run MPI code using IPyParallel (requires [mpi4py](https://mpi4py.readthedocs.io/en/stable/install.html)):
```python
import ipyparallel as ipp


def mpi_example():
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    return f"Hello World from rank {comm.Get_rank()}. total ranks={comm.Get_size()}"


# request an MPI cluster with 4 engines
with ipp.Cluster(engines='mpi', n=4) as rc:
    # get a broadcast_view on the cluster which is best
    # suited for MPI style computation
    view = rc.broadcast_view()
    # run the mpi_example function on all engines in parallel
    r = view.apply_sync(mpi_example)
    # Retrieve and print the result from the engines
    print("\n".join(r))
# at this point, the cluster processes have been shutdown
```
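With four engines, the printed result should look roughly like the following; the rank-to-engine mapping is not guaranteed, so the lines may appear in a different order:
```
Hello World from rank 0. total ranks=4
Hello World from rank 1. total ranks=4
Hello World from rank 2. total ranks=4
Hello World from rank 3. total ranks=4
```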

Follow the [tutorial][] to learn more.
[tutorial]: ./tutorial/index
## Contents
```{toctree}
:maxdepth: 2
tutorial/index
reference/index
```
```{toctree}
:maxdepth: 2
examples/index
```
```{toctree}
:maxdepth: 1
changelog
```
# IPython Parallel API
```{toctree}
:maxdepth: 2
api/ipyparallel.rst
```
# Indices and tables
- {ref}`genindex`
- {ref}`modindex`
- {ref}`search`