<html>
<head>
<title>Tutorial on the parallelism in the linear-response case.</title>
</head>
<body bgcolor="#ffffff">
<hr>
<h1>ABINIT tutorial : parallelism in the linear-response case.</h1>
<hr>
<p>This lesson aims at showing how to use the parallelism for all the properties that are computed
on the basis of the linear-response part of ABINIT, i.e. :
  <ul>
    <li>responses to atomic displacements (to compute phonon frequencies, band structures, ... )
    <li>responses to homogeneous electric fields (dielectric constant, Born effective charges, Infra-red characteristics )
    <li>responses to strain (elastic constants, piezoelectric constants ...)
  </ul>
  Such computations are performed when one of the input variables 
   <a href="../input_variables/varrf.html#rfphon">rfphon</a>, 
   <a href="../input_variables/varrf.html#rfelfd">rfelfd</a> or 
   <a href="../input_variables/varrf.html#rfstrs">rfstrs</a>
    is non-zero,
   which activates <a href="../input_variables/vargs.html#optdriver">optdriver</a>=1.
  You are expected to be already familiar with such calculations before starting the present tutorial.
  See the input variables described  in the file <A href="../input_variables/varrf.html">VARRF</a>
  and the tutorial <a href="lesson_rf1.html">Response-Function 1</a> and subsequent tutorials.
<p>
  You will first learn about the basic implementation of parallelism for linear-response calculations.
  You will then run a very simple, quick calculation of one dynamical matrix for FCC aluminum, whose
  scaling is limited to a few dozen computing cores. Finally, using a provided input file, you will run
  a calculation that scales much better, but would take a few hours in sequential mode.
  For the last section of that part, you would be better off having access to more than 100 computing cores, although
  you might also adjust the input parameters to the machine you have at hand.
  For the other parts of the tutorial, a 16-computing-core machine is recommended, 
  in order to perform the scalability measurements.
<p>This lesson should take less than two hours to complete if a powerful parallel computer is available. </p>
<p>You are expected to already know some basics of parallelism in ABINIT, as explained in the tutorial
<a href="lesson_basepar.html">A first introduction to ABINIT in parallel</a>.

<h5>Copyright (C) 2000-2014 ABINIT group (XG,RC)
<br> This file is distributed under the terms of the GNU General Public License, see
~abinit/COPYING or <a href="http://www.gnu.org/copyleft/gpl.txt">
http://www.gnu.org/copyleft/gpl.txt </a>.
<br> For the initials of contributors, see ~abinit/doc/developers/contributors.txt .
</h5>

<script type="text/javascript" src="list_internal_links.js"> </script>

<h3><b>Contents of lesson paral_dfpt:</b></h3>

<ul>
  <li><a href="lesson_paral_dfpt.html#1">1</a>. The structure of the parallelism for the linear-response (Density-Functional Perturbation Theory) part of ABINIT
  <li><a href="lesson_paral_dfpt.html#2">2</a>. Computation of one dynamical matrix (q =0.25 -0.125 0.125) for FCC aluminum 
  <li><a href="lesson_paral_dfpt.html#3">3</a>. Computation of one perturbation for a slab of 29 atoms of barium titanate
</ul>
<hr>
<h3><b>
<a name="1">1. The structure of the parallelism for the linear-response (Density-Functional Perturbation Theory) part of ABINIT</a> 
</b> </h3>

<b>1.1.</b> Let us first recall the basic structure of a typical linear-response calculation. This is summarized in box 1.
<br>
<img style="height: 550px; width: 900px;" alt="Schema 1"
 src="lesson_paral_dfpt/DFPT_Structure.png"><br>
Step 1 is carried out in a preliminary ground-state calculation (either in an independent run,
or in an earlier dataset of the same run, before the linear-response calculation).
The parallelisation of this step is examined in a separate lesson.
<br>
<p>
Steps 2 and 3 are the time-consuming linear-response steps, to which the present tutorial is dedicated,
and for which the implementation of the parallelism will be explained. They generate different files, in particular
one (or several) DDB file(s). As explained in related tutorials (see e.g. <a href="lesson_rf1.html">Response-Function 1</a>), 
several perturbations are usually treated in one dataset (hence the "Do for each perturbation" loop in step 2 of this schema). For example, 
in one dataset, although only one phonon wavevector is considered, all the primitive atomic displacements for this wavevector 
(as determined by the symmetries) can be treated in turn.
<br>
<p>
Step 4 refers to the MRGDDB and ANADDB post-processing of the DDB files generated by steps 2 and 3. These steps are not 
time-consuming, and no parallelism is needed for them.
<p>

<b>1.2.</b> The equations to be considered for the computation of the first-order wavefunctions and potentials (step 2 of box 1),
which is the really time-consuming part of any linear-response calculation, are presented in box 2.
<br>
<img style="height: 550px; width: 900px;" alt="Schema 2"
 src="lesson_paral_dfpt/DFPT_Equations.png"><br>

The parallelism currently implemented for the linear-response part of ABINIT is based on
the parallel distribution of the most important arrays: the set of first-order and ground-state wavefunctions.
<br>
Each such wavefunction is stored entirely in the memory attached to one computing core (a single wavefunction is never split
between different computing cores, neither in real space nor in reciprocal space), but different wavefunctions are distributed 
over different computing cores.
This readily achieves a combined k-point, spin and band-index parallelism, as explained in box 3. 
<br>
<img style="height: 550px; width: 900px;" alt="Schema 3"
 src="lesson_paral_dfpt/1WFs.png"><br>

<b>1.3.</b> The parallelism over k points, band index and spin for the set of wavefunctions is implemented in all relevant steps,
except for the reading (initialisation) of the ground-state wavefunctions (from an external file). Thus, the most CPU time-consuming
parts of the linear-response computations are parallelized. 
<br>
The following are not parallelized :
<ul>
 <li>input of the set of ground-state wavefunctions</li>
 <li>computing the first-order change of potential from the first-order density, 
    and similar operations that do not depend on the bands or k points.</li>
</ul>
In the case of small systems, the maximum achievable speed-up will be limited by the input of the set of ground-state wavefunctions.
For medium to large systems, the maximum achievable speed-up will be determined by the operations that do not depend 
on the k point and band indices.
<br>
Finally, the distribution of the set of ground-state wavefunctions to the computing cores
	that need them is only partly parallelized, since no parallelism over bands can be exploited for this operation.
<br style="font-style: italic;">
<hr style="width: 100%; height: 2px; font-style: italic;"><span style="font-style: italic;"><br>
Before continuing, you might want to work in a different subdirectory, as
for the other lessons. Why not call it "work_paral_dfpt" ? 
All the input files can
be found in the ~abinit/tests/tutoparal/Input directory.
You might have to adapt them to the path of the directory in which you have decided to perform your runs.
You can compare your
results with reference output files located in
~abinit/tests/tutoparal/Refs.<br>
<br>
In the following, when "(mpirun ...) abinit" appears, you have
to use a specific command line according to the operating system and
architecture of the computer you are using. This can be for instance:
mpirun -np 16 abinit <br>
</span><span style="font-weight: bold;"></span>

<p><hr>
<h3><b>
<a name="2">2. Computation of one dynamical matrix (q =0.25 -0.125 0.125) for FCC aluminum</a>
</b> </h3>

We start by treating the case of a small system, namely FCC aluminum, which has only one atom
per unit cell. Of course, many k points are needed. 
<br>
The input files can be found in the directory ~abinit/tests/tutoparal/Input .
<p>
<br>
<b>2.1.</b> The first step is the precomputation of the ground-state wavefunctions.
This is driven by the file tdfpt_01.files (and the input file tdfpt_01.in).
You should edit and examine them. 
<br>
The calculation relies on a k-point grid of 8x8x8 with 4 shifts (=2048 k points), and 5 bands.
For this ground-state calculation, symmetries can be used to drastically reduce the number of k points :
there are 60 k points in the irreducible Brillouin zone (this cannot be deduced from the examination of the 
input file, though). This calculation is actually very fast.
<br>
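<p>
If you do not remember the structure of a .files file, recall that it simply lists, in order : the input file, the main output file,
the root names for input, output and temporary files, and finally the pseudopotential file(s).
The sketch below only illustrates what tdfpt_01.files could look like ; the last line is a placeholder, and the actual file
distributed in ~abinit/tests/tutoparal/Input is the authoritative version.
<pre>
tdfpt_01.in
tdfpt_01.out
tdfpt_01.i
tdfpt_01.o
tdfpt_01
(path to the Al pseudopotential file)
</pre>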
You can launch it :
<br>
<pre>
(mpirun ...)  abinit < tdfpt_01.files > tdfpt_01.log
</pre>
A reference output file is available in ~abinit/tests/tutoparal/Refs , under the name tdfpt_01.out .
It was obtained using 4 computing cores, and took a few seconds.

<p>
<b>2.2.</b> The second step is the linear-response calculation, for which the files are 
tdfpt_02.files (and tdfpt_02.in).
<br>
There are three perturbations (three atomic displacements). For the first two perturbations, no symmetry can be used,
while for the third, two symmetries can be used to reduce the number of k points to 1024.
Hence, for the perfectly scalable sections of the code, the maximum speed-up is 5120 (=1024 k points * 5 bands),
if you have access to 5120 computing cores. 
However, the sequential parts of the code dominate at a much, much lower number of cores.
Indeed, the sequential parts actually represent a few percent of the run time
on one processor, depending on the machine you run on. The speed-up might thus saturate at a value between 4 and 8 (depending on the machine).
<br>
First copy the output of the ground-state calculation so that it can be used as the input of the 
linear-response calculation :
<pre>
cp tdfpt_01.o_WFK tdfpt_02.i_WFK
cp tdfpt_01.o_WFK tdfpt_02.i_WFQ
</pre>
Then, you can launch the calculation :
<br>
<pre>
(mpirun ...)  abinit < tdfpt_02.files > tdfpt_02.log
</pre>

A reference output file is given in ~abinit/tests/tutoparal/Refs , under the name tdfpt_02.out .
Open it and examine some of the information it contains.
<br> The calculation was run on four computing cores :
<pre>
-   nproc =    4
</pre>
The wall clock time is less than 50 seconds :
<pre>
-
- Proc.   0 individual time (sec): cpu=         48.5  wall=         48.5

================================================================================

 Calculation completed.
.Delivered    0 WARNINGs and   3 COMMENTs to log file.
+Overall time at end (sec) : cpu=        194.1  wall=        194.1
</pre>
The main results are the phonon frequencies :
<pre>
  Phonon wavevector (reduced coordinates) :  0.25000 -0.12500  0.12500
 Phonon energies in Hartree :
   6.944980E-04  7.756637E-04  1.145943E-03
 Phonon energies in meV     :
-  1.889825E+01  2.110688E+01  3.118270E+01
 Phonon frequencies in cm-1    :
-  1.524247E+02  1.702385E+02  2.515054E+02
 Phonon frequencies in Thz     :
-  4.569578E+00  5.103622E+00  7.539943E+00
 Phonon energies in Kelvin  :
-  2.193049E+02  2.449349E+02  3.618597E+02
</pre>
<p>
<b>2.3.</b> Because this test case is quite fast, you should play a bit with it. In particular,
run it several times with an increasing number of computing cores (say, up to 32 computing cores, at which point
the speed-up should have saturated).
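<p>
A minimal shell sketch for such a scan is given below. It assumes a plain "mpirun -np" launcher and that the main output
file is named tdfpt_02.out in tdfpt_02.files ; it renames that file after each run so that successive runs do not interfere.
Adapt the launcher, the list of core counts and the file handling to your machine or batch system.
<pre>
#!/bin/bash
# Scalability scan for the tdfpt_02 case (sketch ; adapt to your machine).
# Remove or rename any pre-existing tdfpt_02.out before starting.
for ncores in 1 2 4 8 16 32 ; do
  mpirun -np $ncores abinit < tdfpt_02.files > tdfpt_02_${ncores}cores.log 2>&1
  mv tdfpt_02.out tdfpt_02_${ncores}cores.out
  # The phonon energies should not depend on the number of cores ...
  grep -A 1 'Phonon energies in Hartree' tdfpt_02_${ncores}cores.out
  # ... while the wall clock time should decrease, until inwffil dominates.
  grep 'Proc.   0 individual time' tdfpt_02_${ncores}cores.out
done
</pre>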
<p>
You should be able to obtain the following.
<br>
<ol>
<li>The result is independent (to an exquisite accuracy)  of the number of computing cores that is used</li>
<li>The timing section reveals that the reading of the ground-state wavefunction file is the limiting step for the parallelisation </li>
</ol>
<p>
Concerning the latter, you will need to understand the timing section of the output file. It appears a bit before
the end of the file :
<pre>
-
- For major independent code sections, cpu and wall times (sec),
-  as well as % of the time and number of calls for node 0-
- routine                        cpu     %       wall     %      number of calls
- fourwf(pot)                   19.834  10.2     19.794  10.2         187989
- inwffil                        7.387   3.8      7.390   3.8             10
- fourwf(G->r)                   6.721   3.5      6.869   3.5         116049
- cgwf3-O(npw)                   2.226   1.1      2.228   1.1             -1
- vtorho3-kpt loop               2.181   1.1      2.163   1.1             33
- nonlop(forces)                 1.924   1.0      1.947   1.0          90880
- projbd                         1.775   0.9      1.734   0.9         318634
- nonlop(apply)                  1.690   0.9      1.719   0.9         116309
- vtowfk3(contrib)               1.654   0.9      1.556   0.8             -1
- 61   others                    2.819   1.5      2.803   1.4
-
- subtotal                      48.211  24.8     48.203  24.8

- For major independent code sections, cpu and wall times (sec),
- as well as % of the total time and number of calls

- routine                         cpu     %       wall     %      number of calls
-                                                                  (-1=no count)
- fourwf(pot)                   79.067  40.7     79.326  40.9         752230
- inwffil                       29.548  15.2     29.560  15.2             40
- fourwf(G->r)                  25.828  13.3     25.898  13.3         447552
- cgwf3-O(npw)                   9.096   4.7      9.081   4.7             -4
- vtorho3-kpt loop               8.672   4.5      8.611   4.4            132
- nonlop(forces)                 7.707   4.0      7.716   4.0         363520
- projbd                         6.925   3.6      6.668   3.4        1275084
- nonlop(apply)                  6.810   3.5      6.864   3.5         465510
- vtowfk3(contrib)               6.280   3.2      6.199   3.2             -4
- getghc-other                   3.176   1.6      3.195   1.6             -4
- status                         2.615   1.3      2.534   1.3         919162
- vtorho3:synchro                2.029   1.0      2.040   1.1            132
- 58   others                    4.175   2.2      4.217   2.2

- subtotal                     191.928  98.9    191.909  98.9
</pre>
It is made of two groups of data. The first one corresponds to the analysis of the timing
for computing core (node) number 0. The second one is the sum of the same data over all computing cores.
Note that there is roughly a factor of four between these two groups,
reflecting a good load balance.
<p>
Let's examine the second group of data in more detail. It corresponds to a decomposition 
of the most time-consuming parts of the code. Note that the subtotal is 98.9 percent, so
the timing statistics are quite complete. Without going into the detail of each routine,
the most significant information for the present purpose is that, among all the
timed sections of the code, only "inwffil" and "vtorho3:synchro" will not benefit from
the parallelism.
<br>
"inwffil" is a subroutine whose job is to read the ground-state wavefunctions
(you can find the source of the "inwffil" routine in <a href="http://www.abinit.org/downloads/browse-lastest-sources">the latest ABINIT sources</a>).
As mentioned in the section 1, the reading of the ground-state wavefunctions is not done in parallel in the case of the linear-response
computations (note that the reading is actually parallelized for e.g. ground-state calculations).
In the output file provided as a reference (with four computing cores), the "inwffil" wall clock time 
is 7.387 seconds, on a total of 48.211 secs. By increasing the number of computing cores, it will be
possible to decrease the total time, but not below the value of 7.387 seconds in any case.
You should observe a similar behaviour with your own tests.
<hr>
<h3><b>
<a name="3">3. Computation of one perturbation for a slab of 29 atoms of barium titanate</a>
</b> </h3>

<b>3.1.</b> This test, with 29 atoms, is slower, but scales better than the FCC Al case. It consists of the
computation of one perturbation at qpt 0.0 0.25 0.0 for a 29-atom slab of barium titanate, artificially terminated
by a double TiO2 layer on each face, with a reasonable
k-point sampling of the Brillouin zone.
The symmetry of the system and of the perturbation allows one to reduce this sampling to one quarter of the Brillouin zone.
E.g. with the k-point sampling ngkpt 4 4 1 , there are actually 4 k points in the irreducible Brillouin zone for the ground-state calculation.
For the response-function case, only one symmetry survives, so that, after the calculation of the frozen-wavefunction part
(for which no symmetry is used), the self-consistent part is done with 8 k points in the corresponding irreducible Brillouin zone.
With the 8 8 1 sampling, there are 32 k points in the irreducible Brillouin zone for the linear-response case.
There are 120 bands.
Note that the value of <a href="../input_variables/varbas.html#ecut">ecut</a> used in the present
tutorial is too low to obtain physical results (it should be around 40 Hartree).
<p>
As in the previous case, a preparatory ground-state calculation is needed.
<p>The input files are provided in the directory ~abinit/tests/tutoparal/Input . 
The preparatory step is governed by tdfpt_03.files (and tdfpt_03.in). The
real (=linear-response) test case is governed by tdfpt_04.files (and tdfpt_04.in).
The reference output files are present in ~abinit/tests/tutoparal/Refs :
tdfpt_0324.out and tdfpt_0432.out . The naming convention is such that the number of cores
used to run them is appended to the name of the test : the tdfpt_03.in file was run with 24 cores,
while tdfpt_04.in was run with 32 cores.
The preparatory step took about 5 minutes, and the linear-response step took about 5 minutes as well.
You can now run these test cases. For tdfpt_03, you might need to change the npband value (presently 6) if you are not
using 24 processors, as sketched below. By contrast, for tdfpt_04, no adaptation of the input file is needed to run on an arbitrary
number of processors.
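<p>
For instance, assuming that tdfpt_03.in distributes the ground-state work with the npkpt and npband input variables
(npkpt 4 and npband 6 for the 24-core reference run) and uses no FFT parallelism, a 16-core run could keep npkpt 4 and
lower npband from 6 to 4, as in the sketch below. This is only an illustrative adaptation, not the distributed input :
the chosen npband must divide the number of bands (120), and the product npkpt*npband must match the number of cores.
<pre>
   npkpt  4      ! one group of cores per ground-state k point (4 k points in the IBZ)
   npband 4      ! 4 divides nband=120 ; npkpt*npband = 16 matches a 16-core run
</pre>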

<br>To launch the ground-state computation, type :
<br>
<pre>
(mpirun ...)  abinit < tdfpt_03.files > tdfpt_03.log
</pre>
<p>
then copy the output of the ground-state calculation so that it can be used as the input of the
linear-response calculation :
<pre>
cp tdfpt_03.o_WFK tdfpt_04.i_WFK
cp tdfpt_03.o_WFK tdfpt_04.i_WFQ
</pre>
and launch the calculation :
<br>
<pre>
(mpirun ...)  abinit < tdfpt_04.files > tdfpt_04.log
</pre>
Now examine the output file obtained for test 04, especially its timing section.
<p>
In the reference file ~abinit/tests/tutoparal/Refs/tdfpt_0432.out , with 32 computing cores,
the timing section delivers :
<pre>
- For major independent code sections, cpu and wall times (sec),
- as well as % of the total time and number of calls

- routine                         cpu     %       wall     %      number of calls
-                                                                  (-1=no count)
- projbd                      3046.599  34.0   3065.259  34.0         171639
- fourwf%(pot)                2545.342  28.4   2561.313  28.4         103098
- nonlop(apply)               1059.279  11.8   1066.034  11.8          85818
- fourwf%(G->r)                531.322   5.9    534.929   5.9          50112
- vtorho3:synchro              444.450   5.0    448.598   5.0            576
- nonlop(forces)               195.017   2.2    195.487   2.2         100800
- newkpt(excl. rwwf   )        179.142   2.0    179.185   2.0            -32
- vtowfk3(contrib)             137.486   1.5    137.878   1.5            -32
- pspini                        97.276   1.1     99.901   1.1             32

<...>

- 45   others                    0.000   0.0      0.000   0.0

- subtotal                    8760.405  97.8   8811.874  97.8
</pre>
<p>
You will notice that the sum of the major independent code sections again accounts for nearly 100% of the time.
You might now explore the behaviour of the CPU time for different numbers of computing cores
(consider values below and above 32 processors). 
Some time-consuming routines will benefit from the parallelism, while others will not.
<p>
The k-point+band parallelism works efficiently for many important sections of the code :
projbd, fourwf%(pot), nonlop(apply), fourwf%(G->r). In this test,
the product of nkpt (the effective number of k points for the current perturbation) and nband is
8*120=960. Of course, the total speed-up will saturate well below this value, as there are some
non-parallelized sections of the code.
<p>
In the above-mentioned list, the k-point+band parallelism cannot be exploited
(or is poorly exploited) in several sections of the code :
"vtorho3:synchro", about 5 percent of the total time of the run on 32 processors,
"newkpt(excl. rwwf)", about 2 percent, "vtowfk3(contrib)", about 1.5 percent, and
"pspini", about 1 percent. This amounts to about 10% of the total and, according
to Amdahl's law, the speed-up will therefore saturate soon, with fewer than 100 processors.
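<p>
As a reminder, Amdahl's law bounds the speed-up on N cores by 1/(s + (1-s)/N) when a fraction s of the work is sequential.
A very rough estimate with the figure above (s of about 0.10, even though this fraction is itself measured on the 32-core run
and actually varies with the number of cores) illustrates why adding cores quickly stops paying off at this k-point sampling :
<pre>
 S(N)   = 1 / ( s + (1-s)/N )        with s ~ 0.10

 S(32)  ~ 1 / ( 0.10 + 0.90/32 )  ~  7.8
 S(100) ~ 1 / ( 0.10 + 0.90/100 ) ~  9.2
 S(inf) ~ 1 / 0.10                =  10
</pre>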
<p>

<b>3.2.</b> Better parallel behaviour can be observed if the k-point sampling is brought back to a converged value (8x8x1).
Try this if you have more than 100 processors at hand.
<br> Set in your input file tdfpt_03.in :
<pre>
   ngkpt 8 8 1    ! This should replace ngkpt 4 4 1 
   npkpt 16       ! This should replace npkpt 4
</pre>
Also, set in tdfpt_04.in :
<pre>
   ngkpt 8 8 1    ! This should replace ngkpt 4 4 1
</pre>
and launch again the preliminary step, then the linear-response step.
You can then practice the linear-response calculation by varying the number of computing cores. You
could even consider varying the number of self-consistent iterations,
either to see the initialisation effects (small value of nstep) or to target converged results (nstep 50 instead of nstep 18). 
The energy cut-off might also be increased (e.g. ecut 40 Hartree gives much better values).
Indeed, with a large number of k points and a large value of nstep, you should be able to obtain 
a speed-up of more than one hundred for the linear-response calculation,
when compared to a sequential run (see below). 
Keep track of the time for each number of computing cores, to observe the scaling.
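<p>
A simple way to collect these timings is to keep one output file per run (for instance tdfpt_04_16cores.out,
tdfpt_04_32cores.out, and so on ; the naming is of course up to you) and to grep the individual-time line quoted below :
<pre>
# Gather the wall clock time of each run in one table (adapt the file names to yours).
grep 'Proc.   0 individual time' tdfpt_04_*cores.out
</pre>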

<p>
As a typical observation, the wall clock time decreases from 
<pre>
- Proc.   0 individual time (sec): cpu=       2977.3  wall=       2977.3
</pre>
with 16 processors to
<pre>
- Proc.   0 individual time (sec): cpu=        513.2  wall=        513.2
</pre>
with 128 processors.
<p>
The next figure presents the speed-up of a typical calculation with an increasing number of computing cores
(as well as the efficiency of the calculation).
<img style="height: 550px; width: 900px;" alt="Speedup"
 src="lesson_paral_dfpt/Speedup.jpeg"><br>
Beyond 300 computing cores, the sequential parts of the code start to dominate.
With more realistic computing parameters (ecut 40), they
dominate only beyond 600 processors.
<p>
This last example ends the present tutorial. You have learned the
basics of the current implementation of the parallelism for the linear-response
part of ABINIT, and you have explored two test cases : one for a small-cell material,
with lots of k points, and another one of medium size, in which the k-point and band parallelism
can be used efficiently on more than one hundred computing cores.

<script type="text/javascript" src="list_internal_links.js"> </script>

</body>
</html>