File: demos.md

(parallel-examples)=

# Parallel examples

In this section we describe two more involved examples of using an IPython
cluster to perform a parallel computation. We will be doing some plotting,
so we start IPython with matplotlib integration by typing:

```
ipython --matplotlib
```

at the system command line.
Or you can enable matplotlib integration at any point with:

```ipython
In [1]: %matplotlib
```

## 150 million digits of pi

In this example we would like to study the distribution of digits in the
number pi (in base 10). While it is not known if pi is a normal number (a
number is normal in base 10 if 0-9 occur with equal likelihood) numerical
investigations suggest that it is. We will begin with a serial calculation on
10,000 digits of pi and then perform a parallel calculation involving 150
million digits.

In both the serial and parallel calculation we will be using functions defined
in the {file}`pidigits.py` file, which is available in the
{file}`examples/parallel` directory of the IPython source distribution.
These functions provide basic facilities for working with the digits of pi and
can be loaded into IPython by putting {file}`pidigits.py` in your current
working directory and then doing:

```ipython
In [1]: run pidigits.py
```

### Serial calculation

For the serial calculation, we will use [SymPy](https://www.sympy.org) to
calculate 10,000 digits of pi and then look at the frequencies of the digits
0-9. Out of 10,000 digits, we expect each digit to occur 1,000 times. While
SymPy is capable of calculating many more digits of pi, our purpose here is to
set the stage for the much larger parallel calculation.

In this example, we use two functions from {file}`pidigits.py`:
{func}`one_digit_freqs` (which calculates how many times each digit occurs)
and {func}`plot_one_digit_freqs` (which uses Matplotlib to plot the result).
Here is an interactive IPython session that uses these functions with
SymPy:

```ipython
In [7]: import sympy

In [8]: pi = sympy.pi.evalf(40)

In [9]: pi
Out[9]: 3.141592653589793238462643383279502884197

In [10]: pi = sympy.pi.evalf(10000)

In [11]: digits = (d for d in str(pi)[2:])  # create a sequence of digits

In [12]: freqs = one_digit_freqs(digits)

In [13]: plot_one_digit_freqs(freqs)
Out[13]: [<matplotlib.lines.Line2D object at 0x18a55290>]
```
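For reference, a digit-frequency counter along the lines of {func}`one_digit_freqs` can be sketched in a few lines. This is an illustrative stand-in, not the actual code from {file}`pidigits.py`:

```python
def count_digit_freqs(digits):
    """Count how often each digit 0-9 occurs in an iterable of
    digit characters. Illustrative sketch of one_digit_freqs;
    the real pidigits.py implementation may differ in detail."""
    freqs = [0] * 10
    for d in digits:
        freqs[int(d)] += 1
    return freqs

# First ten digits of pi after the decimal point:
print(count_digit_freqs("1415926535"))
# -> [0, 2, 1, 1, 1, 3, 1, 0, 0, 1]
```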

The resulting plot of the single digit counts shows that each digit occurs
approximately 1,000 times, but that with only 10,000 digits the
statistical fluctuations are still rather large:

```{image} figs/single_digits.*

```

It is clear that to reduce the relative fluctuations in the counts, we need
to look at many more digits of pi. That brings us to the parallel calculation.
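How many more? If the digits were uniformly random, each count would fluctuate by roughly the square root of its expected value, so the relative fluctuation of a single digit's count falls off as 1/sqrt(N). A quick back-of-the-envelope check (assuming uniform digits):

```python
import math

def relative_fluctuation(n_digits, n_bins=10):
    """Expected relative fluctuation of one digit's count if the
    digits were uniformly random: sqrt(N/10) / (N/10) = sqrt(10/N)."""
    expected = n_digits / n_bins
    return math.sqrt(expected) / expected

print(f"{relative_fluctuation(10_000):.2%}")       # -> 3.16%
print(f"{relative_fluctuation(150_000_000):.4%}")  # -> 0.0258%
```

Going from 10,000 to 150 million digits should therefore shrink the relative fluctuations by a factor of about sqrt(15,000), roughly 120.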

### Parallel calculation

Calculating many digits of pi is a challenging computational problem in itself.
Because we want to focus on the distribution of digits in this example, we
will use pre-computed digits of pi from the website of Professor Yasumasa
Kanada at the University of Tokyo (<https://super-computing.org>). These
digits come in a set of text files (<ftp://pi.super-computing.org/.2/pi200m/>)
that each have 10 million digits of pi.

For the parallel calculation, we have copied these files to the local hard
drives of the compute nodes. A total of 15 of these files will be used, for a
total of 150 million digits of pi. To make things a little more interesting we
will calculate the frequencies of all two-digit sequences (00-99) and then plot
the result as a 2D matrix in Matplotlib.
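Counting two-digit sequences amounts to sliding a window of width two along the digit string. A minimal sketch of the counting step, using overlapping pairs (the actual {file}`pidigits.py` code may differ):

```python
def count_two_digit_freqs(digits):
    """Count each overlapping two-digit sequence 00-99 in a string
    of digit characters. Illustrative sketch only."""
    freqs = [0] * 100
    for i in range(len(digits) - 1):
        freqs[int(digits[i:i + 2])] += 1
    return freqs

freqs = count_two_digit_freqs("14159")
# The overlapping pairs are "14", "41", "15", "59":
print([i for i, f in enumerate(freqs) if f])
# -> [14, 15, 41, 59]
```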

The overall idea of the calculation is simple: each IPython engine will
compute the two digit counts for the digits in a single file. Then in a final
step the counts from each engine will be added up. To perform this
calculation, we will need two top-level functions from {file}`pidigits.py`,
{func}`compute_two_digit_freqs` and {func}`reduce_freqs`:

```{literalinclude} ../examples/pi/pidigits.py
:language: python
:lines: 52-67
```

We will also use the {func}`plot_two_digit_freqs` function to plot the
results. The code to run this calculation in parallel is contained in
{file}`examples/parallel/parallelpi.py`. This code can be run in parallel
using IPython by following these steps:

1. Use {command}`ipcluster` to start 15 engines. We used 16 cores of an SGE Linux
   cluster (1 controller + 15 engines).
2. With the file {file}`parallelpi.py` in your current working directory, open
   IPython, enable matplotlib, and type `run parallelpi.py`. This will download
   the pi files via ftp the first time you run it, if they are not already
   present in the engines' working directory.
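For step 1, starting the engines with a pre-configured profile might look like the following (the profile name and engine count here are illustrative; adjust them to your cluster):

```
ipcluster start -n 15 --profile=mycluster
```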

When run on our 16 cores, we observe a speedup of 14.2x. This is slightly
less than linear scaling (16x) because the controller is also running on one of
the cores.
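The scaling numbers work out as follows; measured against the 15 engines actually doing the work, rather than all 16 cores, the efficiency is higher than the headline figure:

```python
speedup = 14.2   # measured speedup from the run above
cores = 16       # total cores used (1 controller + 15 engines)
engines = 15     # cores running compute engines

print(f"efficiency vs. 16 cores:   {speedup / cores:.0%}")    # -> 89%
print(f"efficiency vs. 15 engines: {speedup / engines:.0%}")  # -> 95%
```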

To emphasize the interactive nature of IPython, we now show how the
calculation can also be run by typing the commands from
{file}`parallelpi.py` interactively into IPython:

```ipython
In [1]: import ipyparallel as ipp

# The Client allows us to use the engines interactively.
# We pass Client the name of the cluster profile we
# are using.
In [2]: c = ipp.Client(profile='mycluster')

In [3]: v = c[:]

In [4]: c.ids
Out[4]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

In [5]: run pidigits.py

In [6]: filestring = 'pi200m.ascii.%(i)02dof20'

# Create the list of files to process.
In [7]: files = [filestring % {'i': i} for i in range(1, 16)]

In [8]: files
Out[8]:
['pi200m.ascii.01of20',
 'pi200m.ascii.02of20',
 'pi200m.ascii.03of20',
 'pi200m.ascii.04of20',
 'pi200m.ascii.05of20',
 'pi200m.ascii.06of20',
 'pi200m.ascii.07of20',
 'pi200m.ascii.08of20',
 'pi200m.ascii.09of20',
 'pi200m.ascii.10of20',
 'pi200m.ascii.11of20',
 'pi200m.ascii.12of20',
 'pi200m.ascii.13of20',
 'pi200m.ascii.14of20',
 'pi200m.ascii.15of20']

# Download the data files if they don't already exist:
In [9]: v.map(fetch_pi_file, files)

# This is the parallel calculation using the DirectView.map method,
# which applies compute_two_digit_freqs to each file in files in parallel.
In [10]: freqs_all = v.map(compute_two_digit_freqs, files)

# Add up the frequencies from each engine.
In [11]: freqs = reduce_freqs(freqs_all)

In [12]: plot_two_digit_freqs(freqs)
Out[12]: <matplotlib.image.AxesImage object at 0x18beb110>

In [13]: plt.title('2 digit counts of 150m digits of pi')
Out[13]: <matplotlib.text.Text object at 0x18d1f9b0>
```

The resulting plot generated by Matplotlib is shown below. The colors indicate
which two-digit sequences are more (red) or less (blue) likely to occur in the
first 150 million digits of pi. We clearly see that the sequence "41" is the
most likely and that "06" and "07" are the least likely. Further analysis would
show that the relative size of the statistical fluctuations has decreased
compared to the 10,000 digit calculation.

```{image} figs/two_digit_counts.*

```

## Conclusion

To conclude these examples, we summarize the key features of IPython's
parallel architecture that have been demonstrated:

- Serial code can often be parallelized with only a few extra lines of code.
  We have used the {class}`DirectView` and {class}`LoadBalancedView` classes
  for this purpose.
- The resulting parallel code can be run without ever leaving IPython's
  interactive shell.
- Any data computed in parallel can be explored interactively through
  visualization or further numerical calculations.
- We have run these examples on a cluster running RHEL 5 and Sun GridEngine.
  IPython's built-in support for SGE (and other batch systems) makes it easy
  to get started with IPython's parallel capabilities.