(parallel-direct)=
# IPython's Direct interface
The direct interface represents one possible way of working with a set of
IPython engines. The basic idea behind the direct interface is that the
capabilities of each engine are directly and explicitly exposed to the user.
Thus, in the direct interface, each engine is given an id that is used to
identify the engine and give it work to do. This interface is intuitive,
is designed with interactive usage in mind, and is the best place for
new users of IPython to begin.
## Starting the IPython controller and engines
In general, each step in this tutorial will start with a fresh cluster.
There is always a choice when starting an interactive session:
Option 1. starting a new cluster
```python
import ipyparallel as ipp
cluster = ipp.Cluster(n=4)
cluster.start_cluster_sync()
```
Option 2. connecting to an existing cluster,
e.g. if it were started via {command}`ipcluster start`
or another notebook,
or a JupyterLab extension.
```python
import ipyparallel as ipp
cluster = ipp.Cluster.from_file()
```
No arguments are required for the default cluster (e.g. `ipcluster start` with no arguments),
but `profile` and/or `cluster_id` would be typical arguments to specify a cluster.
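For example, connecting to a cluster started under a named profile might look like this (a sketch; the `"mpi"` profile name here is only illustrative):
```python
import ipyparallel as ipp

# connect to the cluster belonging to a named profile,
# e.g. one started with `ipcluster start --profile=mpi`
cluster = ipp.Cluster.from_file(profile="mpi")
```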
For more detailed information about starting the controller and engines, see
our {ref}`introduction <parallel-overview>` to using IPython for parallel computing.
## Creating a `DirectView`
The first step is to connect a {class}`.Client` to your cluster:
```ipython
In [2]: rc = cluster.connect_client_sync()
```
To make sure there are engines connected to the controller, users can get a list
of engine ids:
```ipython
In [3]: rc.wait_for_engines(4); rc.ids
Out[3]: [0, 1, 2, 3]
```
Here we see that there are four engines ready to do work for us.
For direct execution, we will make use of a {class}`DirectView` object, which can be
constructed via list-access to the client:
```ipython
In [4]: dview = rc[:] # use all engines
```
```{seealso}
For more information, see the in-depth explanation of {ref}`Views <parallel-details>`.
```
## Quick and easy parallelism
In many cases, you want to call a Python function on a sequence of
objects, but _in parallel_. IPython Parallel provides a simple way
of accomplishing this: using the DirectView's {meth}`~DirectView.map` method.
### Parallel map
Python's builtin {func}`map` function allows a function to be applied to a
sequence element-by-element. This type of code is typically trivial to
parallelize. In fact, since IPython's interface is all about functions anyway,
you can use the builtin {func}`map` with a {class}`RemoteFunction`, or a
DirectView's {meth}`map` method:
```ipython
In [62]: serial_result = list(map(lambda x:x**10, range(32)))
In [63]: parallel_result = dview.map_sync(lambda x: x**10, range(32))
In [64]: serial_result == parallel_result
Out[64]: True
```
```{note}
The {class}`DirectView`'s version of {meth}`map` does
not do dynamic load balancing. For a load-balanced version, use a
{class}`LoadBalancedView`.
```
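For comparison, a minimal sketch of the load-balanced equivalent of the map above (reusing the `rc` client from earlier):
```python
# tasks go to whichever engine is free, instead of being split evenly up front
lview = rc.load_balanced_view()
parallel_result = lview.map_sync(lambda x: x**10, range(32))
```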
## Calling Python functions
The most basic type of operation that can be performed on the engines is to
execute Python code or call Python functions. Executing Python code can be
done in blocking or non-blocking mode (non-blocking is default) using the
{meth}`.View.execute` method, and calling functions can be done via the
{meth}`.View.apply` method.
### apply
The main method for doing remote execution is {meth}`View.apply`; in fact, almost
all methods that communicate with the engines are built on top of it.
We strive to provide the cleanest interface we can, so `apply` has the following
signature:
```python
view.apply(f, *args, **kwargs)
```
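Positional and keyword arguments are forwarded to the function as it runs on each engine. A minimal sketch, assuming `view` is a {class}`DirectView` of two engines:
```python
def add(a, b=1):
    return a + b

# non-blocking by default: apply returns an AsyncResult
ar = view.apply(add, 5, b=10)
ar.get()  # one result per engine, e.g. [15, 15]
```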
There are some controls to influence the behavior of `apply`, called flags.
Views store the default values for these flags as attributes.
The `DirectView` has these flags:
dv.block
: whether to wait for the result, or return an {class}`AsyncResult` object
immediately
dv.track
: whether to instruct pyzmq to track when zeromq is done sending the message.
This is primarily useful for non-copying sends of numpy arrays that you plan to
edit in-place. You need to know when it becomes safe to edit the buffer
without corrupting the message.
There is a performance cost to enabling tracking,
so it is not recommended except for sending very large messages (see the sketch after this list).
dv.targets
: The engines associated with this View.
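For example, `track` mainly matters when pushing a large NumPy array that you want to edit in place afterwards. A minimal sketch, assuming `dview` from above and that NumPy is installed:
```python
import numpy as np

dview.track = True                      # ask pyzmq to track the outgoing message
a = np.random.random((1024, 1024))
ar = dview.push({'a': a}, block=False)

ar.wait_for_send()                      # block until zeromq has finished sending
a[:] = 0                                # now it is safe to edit the buffer in-place
```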
Creating a view is done as if the client is a Python 'container' of engines: index-access on a client creates a {class}`.DirectView`.
```ipython
In [4]: view = rc[1:3]
Out[4]: <DirectView [1, 2]>
In [5]: view.apply<tab>
view.apply view.apply_async view.apply_sync
```
For convenience, you can specify blocking behavior explicitly for a single call with the extra sync/async methods.
### Blocking execution
In blocking mode, the {class}`.DirectView` object (called `dview` in
these examples) submits the command to the controller, which places the
command in the engines' queues for execution. The {meth}`apply` call then
blocks until the engines are done executing the command:
```ipython
In [2]: dview = rc[:] # A DirectView of all engines
In [3]: dview.block=True
In [4]: dview['a'] = 5
In [5]: dview['b'] = 10
In [6]: dview.apply(lambda x: a+b+x, 27)
Out[6]: [42, 42, 42, 42]
```
You can also select blocking execution on a call-by-call basis with the {meth}`apply_sync`
method:
```ipython
In [7]: dview.block = False
In [8]: dview.apply_sync(lambda x: a+b+x, 27)
Out[8]: [42, 42, 42, 42]
```
Python commands can be executed as strings on specific engines by using a View's `execute`
method:
```ipython
In [6]: rc[::2].execute('c = a + b')
In [7]: rc[1::2].execute('c = a - b')
In [8]: dview['c'] # shorthand for dview.pull('c', block=True)
Out[8]: [15, -5, 15, -5]
```
### async execution
In non-blocking (async) mode, {meth}`apply` submits the command to be executed and
then returns an {class}`AsyncResult` object immediately. The
{class}`AsyncResult` object gives you a way of getting a result at a later
time through its {meth}`get` method.
```{seealso}
Docs on the {ref}`AsyncResult <asyncresult>` object.
```
This allows you to quickly submit long-running commands without blocking your
local IPython session:
```ipython
# define our function
In [6]: def wait(t):
....: import time
....: tic = time.time()
....: time.sleep(t)
....: return time.time()-tic
# In non-blocking mode
In [7]: ar = dview.apply_async(wait, 2)
# Now block for the result
In [8]: ar.get()
Out[8]: [2.0006198883056641, 1.9997570514678955, 1.9996809959411621, 2.0003249645233154]
# Again in non-blocking mode
In [9]: ar = dview.apply_async(wait, 10)
# Poll to see if the result is ready
In [10]: ar.ready()
Out[10]: False
# ask for the result, but wait a maximum of 1 second:
In [45]: ar.get(1)
---------------------------------------------------------------------------
TimeoutError Traceback (most recent call last)
/home/you/<ipython-input-45-7cd858bbb8e0> in <module>()
----> 1 ar.get(1)
/path/to/site-packages/IPython/parallel/asyncresult.pyc in get(self, timeout)
62 raise self._exception
63 else:
---> 64 raise error.TimeoutError("Result not ready.")
65
66 def ready(self):
TimeoutError: Result not ready.
```
```{note}
Note the import inside the function. This is a common model, to ensure
that the appropriate modules are imported where the task is run. You can
also manually import modules into the engine(s) namespace(s) via
`view.execute('import numpy')`.
```
Often, it is desirable to wait until a set of {class}`AsyncResult` objects
are done. For this, there is the {meth}`wait` method. This method takes a
collection of {class}`AsyncResult` objects (or `msg_ids` or integer indices to the client's history),
and blocks until all of the associated results are ready:
```ipython
In [72]: dview.block = False
# A trivial list of AsyncResult objects
In [73]: ar_list = [dview.apply_async(wait, 3) for i in range(10)]
# Wait until all of them are done
In [74]: dview.wait(ar_list)
# Then, their results are ready using get()
In [75]: ar_list[0].get()
Out[75]: [2.9982571601867676, 2.9982588291168213, 2.9987530708312988, 2.9990990161895752]
```
### The `block` and `targets` keyword arguments and attributes
Most DirectView methods (excluding {meth}`apply`) accept `block` and
`targets` as keyword arguments. As we have seen above, these keyword arguments control the
blocking mode and which engines the command is applied to. The {class}`View` class also has
{attr}`block` and {attr}`targets` attributes that control the default behavior when the keyword
arguments are not provided. Thus the following logic is used for {attr}`block` and {attr}`targets`:
- If no keyword argument is provided, the instance attributes are used.
- The keyword arguments, if provided, override the instance attributes for
  the duration of a single call.
The following examples demonstrate how to use the instance attributes:
```ipython
In [16]: dview.targets = [0, 2]
In [17]: dview.block = False
In [18]: ar = dview.apply(lambda : 10)
In [19]: ar.get()
Out[19]: [10, 10]
In [20]: dview.targets = rc.ids # all engines (4)
In [21]: dview.block = True
In [22]: dview.apply(lambda : 42)
Out[22]: [42, 42, 42, 42]
```
The {attr}`block` and {attr}`targets` instance attributes of the
{class}`.DirectView` also determine the behavior of the parallel magic commands.
```{seealso}
See the documentation of the {ref}`Parallel Magics <parallel-magics>`.
```
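For example, activating a view registers `%px` and friends for that view; a quick sketch:
```ipython
In [23]: rc[:].activate()   # register %px, %%px, %pxresult, %autopx for this view
In [24]: %px import os
Parallel execution on engines: [0, 1, 2, 3]
```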
## Moving Python objects around
In addition to calling functions and executing code on engines, you can
transfer Python objects between your IPython session and the engines. In
IPython, these operations are called {meth}`push` (sending an object to the
engines) and {meth}`pull` (getting an object from the engines).
### Basic push and pull
Here are some examples of how you use {meth}`push` and {meth}`pull`:
```ipython
In [38]: dview.push(dict(a=1.03234, b=3453))
Out[38]: [None, None, None, None]
In [39]: dview.pull('a')
Out[39]: [ 1.03234, 1.03234, 1.03234, 1.03234]
In [40]: dview.pull('b', targets=0)
Out[40]: 3453
In [41]: dview.pull(('a', 'b'))
Out[41]: [ [1.03234, 3453], [1.03234, 3453], [1.03234, 3453], [1.03234, 3453] ]
In [42]: dview.push(dict(c='speed'))
Out[42]: [None, None, None, None]
```
In non-blocking mode {meth}`push` and {meth}`pull` also return
{class}`AsyncResult` objects:
```ipython
In [48]: ar = dview.pull('a', block=False)
In [49]: ar.get()
Out[49]: [1.03234, 1.03234, 1.03234, 1.03234]
```
### Dictionary interface
Since a Python namespace is a {class}`dict`, {class}`DirectView` objects provide
dictionary-style access by key and methods such as {meth}`get` and
{meth}`update` for convenience. This makes the remote namespaces of the engines
appear as a local dictionary. Underneath, these methods call {meth}`apply`:
```ipython
In [51]: dview['a'] = ['foo', 'bar']
In [52]: dview['a']
Out[52]: [ ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'] ]
```
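The {meth}`get` and {meth}`update` methods mentioned above mirror their {class}`dict` counterparts; a quick sketch (assuming `dview` from above, in blocking mode):
```python
dview.update(dict(b=3, c='ok'))  # like dict.update: push several names at once
dview.get('b')                   # like dict.get: pull a name from every engine -> [3, 3, 3, 3]
```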
### Scatter and gather
Sometimes it is useful to partition a sequence and push the partitions to
different engines. In MPI language, this is known as scatter/gather and we
follow that terminology. However, it is important to remember that in
IPython's {class}`Client` class, {meth}`scatter` is from the
interactive IPython session to the engines and {meth}`gather` is from the
engines back to the interactive IPython session. For scatter/gather operations
between engines, MPI, pyzmq, or some other direct interconnect should be used.
```ipython
In [58]: dview.scatter('a',range(16))
Out[58]: [None,None,None,None]
In [59]: dview['a']
Out[59]: [ [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15] ]
In [60]: dview.gather('a') # This will show you the status of gather.
Out[60]: <AsyncMapResult: gather:finished>
In [61]: dview.gather('a').get() # This will give you the result.
Out[61]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
In [62]: dview.gather('a')[3] # You can also index directly into the result.
Out[62]: [2]
```
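Scatter and gather also understand NumPy arrays, partitioning along the first axis so that each engine receives a contiguous block of rows. A short sketch (assuming NumPy is installed locally and on the engines):
```python
import numpy as np

A = np.arange(16.0).reshape(4, 4)
dview.scatter('A', A, block=True)             # with 4 engines, each gets one row
A_round_trip = dview.gather('A', block=True)  # concatenates the pieces back together
assert (A_round_trip == A).all()
```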
## Other things to look at
### Signaling engines
New in IPython Parallel 7.0 is the {meth}`Client.send_signal` method.
This lets you directly interrupt engines, which might be running a blocking task
that you want to cancel.
This is also available via the Cluster API.
Unlike the Cluster API, though,
which only allows interrupting whole engine 'sets' (usually all engines in the cluster),
the client API allows interrupting individual engines.
```ipython
In [8]: import time, signal
In [9]: ar = rc[:].apply_async(time.sleep, 5)
In [10]: rc.send_signal(signal.SIGINT)
Out[10]: <Future at 0x7f91a9489fd0 state=pending>
In [11]: ar.get()
[12:apply]:
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
<string> in <module>
KeyboardInterrupt:
[13:apply]:
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
<string> in <module>
KeyboardInterrupt:
[14:apply]:
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
<string> in <module>
KeyboardInterrupt:
[15:apply]:
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
<string> in <module>
KeyboardInterrupt:
```
### Remote function decorators
Remote functions are like normal functions, but when they are called
they execute on one or more engines rather than locally. IPython provides
two decorators for producing parallel functions.
The first is `@remote`, which calls the function on every engine of a view.
```ipython
In [10]: @dview.remote(block=True)
....: def getpid():
....: import os
....: return os.getpid()
....:
In [11]: getpid()
Out[11]: [12345, 12346, 12347, 12348]
```
The `@parallel` decorator creates parallel functions that break up element-wise
operations and distribute them, reconstructing the result.
```ipython
In [12]: import numpy as np
In [13]: A = np.random.random((64,48))
In [14]: @dview.parallel(block=True)
....: def pmul(A,B):
....: return A*B
In [15]: C_local = A*A
In [16]: C_remote = pmul(A,A)
In [17]: (C_local == C_remote).all()
Out[17]: True
```
Calling a `@parallel` function _does not_ correspond to map. It is used for splitting
element-wise operations that operate on a sequence or array. For `map` behavior,
parallel functions have a map _method_.
| call | pfunc(seq) | pfunc.map(seq) |
| ------------------ | --------------------------- | --------------------------- |
| # of tasks | # of engines (1 per engine) | # of engines (1 per engine) |
| # of remote calls | # of engines (1 per engine) | `len(seq)` |
| argument to remote | `seq[i:j]` (sub-sequence) | `seq[i]` (single element) |
A quick example to illustrate the difference in arguments for the two modes:
```ipython
In [16]: @dview.parallel(block=True)
....: def echo(x):
....: return str(x)
In [17]: echo(range(5))
Out[17]: ['[0, 1]', '[2]', '[3]', '[4]']
In [18]: echo.map(range(5))
Out[18]: ['0', '1', '2', '3', '4']
```
```{seealso}
See the {func}`~.remotefunction.parallel` and {func}`~.remotefunction.remote`
decorators for options.
```
### How to do parallel list comprehensions
In many cases list comprehensions are nicer than using the map function. While
we don't have fully parallel list comprehensions, it is simple to get the
basic effect using {meth}`scatter` and {meth}`gather`:
```ipython
In [66]: dview.scatter('x',range(64))
In [67]: %px y = [i**10 for i in x]
Parallel execution on engines: [0, 1, 2, 3]
In [68]: y = dview.gather('y')
In [69]: print(y)
[0, 1, 1024, 59049, 1048576, 9765625, 60466176, 282475249, 1073741824,...]
```
### Remote imports
Sometimes you may want to import packages both in your interactive session
and on your remote engines. This can be done with the context manager
created by a DirectView's {meth}`sync_imports` method:
```ipython
In [69]: with dview.sync_imports():
....: import numpy
importing numpy on engine(s)
```
Any imports made inside the block will also be performed on the view's engines.
`sync_imports` also takes a `local` boolean flag that defaults to `True`, which specifies
whether the local imports should also be performed. However, support for `local=False`
has not been implemented, so only packages that can be imported locally will work
this way. Note that renaming an import on the same line, as in
`import matplotlib.pyplot as plt`, does not work on the remote engines: the `as plt` part is
ignored remotely, even though it takes effect locally. You can, however, rename the remote
handle after the import with `%px plt = pyplot`.
You can also specify imports via the `@ipp.require` decorator. This decorator is
designed for declaring task dependencies, but can be used to handle remote imports as well.
Modules or module names passed to `@ipp.require` will be imported before the decorated
function is called. If they cannot be imported, the decorated function will never
execute and will fail with an `UnmetDependencyError`. Failures of individual engines will
be collected and raise a {exc}`CompositeError`, as demonstrated in the next section.
```ipython
In [70]: @ipp.require('re')
....: def findall(pat, x):
....: # re is guaranteed to be available
....: return re.findall(pat, x)
# you can also pass modules themselves, that you already have locally:
In [71]: @ipp.require(time)
....: def wait(t):
....: time.sleep(t)
....: return t
```
```{note}
{func}`sync_imports` does not allow `import foo as bar` syntax,
because the assignment represented by the `as bar` part is not
available to the import hook.
```
(parallel-exceptions)=
### Parallel exceptions
Parallel commands can raise Python exceptions,
just like serial commands. This is complicated by the fact that a single
parallel command can raise multiple exceptions (one for each engine
the command was run on). To express this idea, we have a
{exc}`CompositeError` exception class that will be raised when there are multiple errors. The
{exc}`CompositeError` class is a special type of exception that wraps one or
more other exceptions. Here is how it works:
```ipython
In [78]: dview.block = True
In [79]: dview.execute("1/0")
[0:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
[1:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
[2:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
[3:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
```
Notice how the error message printed when {exc}`CompositeError` is raised has
information about the individual exceptions that were raised on each engine.
If you want, you can even raise one of these original exceptions:
```ipython
In [80]: try:
....: dview.execute('1/0', block=True)
....: except ipp.CompositeError as e:
....: e.raise_exception()
....:
....:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
```
If you are working in IPython, you can type `%debug` after one of
these {exc}`CompositeError` exceptions is raised and inspect the exception:
```ipython
In [81]: dview.execute('1/0')
[0:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
[1:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
[2:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
[3:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
In [82]: %debug
> /.../site-packages/IPython/parallel/client/asyncresult.py(125)get()
124 else:
--> 125 raise self._exception
126 else:
# Here, self._exception is the CompositeError instance:
ipdb> e = self._exception
ipdb> e
CompositeError(4)
# we can tab-complete on e to see available methods:
ipdb> e.<TAB>
e.args e.message e.traceback
e.elist e.msg
e.ename e.print_traceback
e.engine_info e.raise_exception
e.evalue e.render_traceback
# We can then display the individual tracebacks, if we want:
ipdb> e.print_traceback(1)
[1:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
```
If you have 100 engines, you probably don't want to see 100 identical tracebacks
for a NameError because of a small typo.
For this reason, CompositeError truncates the list of exceptions it will print
to {attr}`CompositeError.tb_limit` (default is five).
You can change this limit to suit your needs with:
```ipython
In [21]: ipp.CompositeError.tb_limit = 1
In [22]: %px x=z
[0:execute]:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
----> 1 x=z
NameError: name 'z' is not defined
... 3 more exceptions ...
```
All of this error handling magic works the same way in non-blocking mode:
```ipython
In [83]: dview.block=False
In [84]: ar = dview.execute('1/0')
In [85]: ar.get()
[0:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero
... 3 more exceptions ...
```
Sometimes you still want to get the successful subset, even when there was an error.
Like {py:func}`asyncio.gather`, {meth}`.AsyncResult.get` and the map functions accept a `return_exceptions` argument
(new in IPython Parallel 7.0),
which returns the Exception objects among the results instead of raising the first error encountered.
```ipython
In [89]: ar = dview.apply_async(lambda: 1/0)
In [90]: ar.get(return_exceptions=True)
Out[90]:
[<Remote[0]:ZeroDivisionError(division by zero)>,
<Remote[1]:ZeroDivisionError(division by zero)>,
<Remote[2]:ZeroDivisionError(division by zero)>,
<Remote[3]:ZeroDivisionError(division by zero)>]
```
```{versionadded} 7.0
The `return_exceptions` feature
```