Calling PW library interface with these flags:
communicator index: 3
communicator size: 8
nimage: 1
npot: 1
npool: 2
ntaskg: 1
nband: 1
ndiag: 4
input: "/home/akohlmey/compile/espresso-qmmm/COUPLE/tests/metal.pw.in"
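The block above is printed by the calling program of the COUPLE tests before control is handed to the PW library; the flags mirror pw.x's command-line parallelization options (images, pools, task groups, band groups, linear-algebra processors) plus the MPI communicator and the input file to read. A minimal C sketch of such a caller follows; the entry-point name f2libpwscf and its argument order are assumptions made for illustration only, the real symbol and signature are defined in the COUPLE sources of the QE distribution.

    /* Hypothetical caller sketch. The entry-point name, argument order and the
     * way the input file name is passed are assumptions; check COUPLE/src in
     * your Quantum ESPRESSO tree for the actual interface. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    /* assumed Fortran-callable prototype (hidden string-length arguments,
       if any, are omitted here) */
    void f2libpwscf(MPI_Fint *comm, int *nim, int *npt, int *npl, int *nta,
                    int *nbn, int *ndg, int *retval, char *infile);

    int main(int argc, char **argv)
    {
        int me, retval = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &me);

        /* the library side expects a Fortran communicator handle */
        MPI_Fint fcomm = MPI_Comm_c2f(MPI_COMM_WORLD);

        /* flags as echoed in the log: 1 image, 1 "pot", 2 k-point pools,
           1 task group, 1 band group, 4 procs for subspace diagonalization */
        int nim = 1, npt = 1, npl = 2, nta = 1, nbn = 1, ndg = 4;

        /* Fortran-style blank-padded file name buffer */
        char infile[80];
        memset(infile, ' ', sizeof(infile));
        memcpy(infile, "metal.pw.in", strlen("metal.pw.in"));

        f2libpwscf(&fcomm, &nim, &npt, &npl, &nta, &nbn, &ndg, &retval, infile);

        if (me == 0)
            printf("Call to libpwscf finished with exit status %d\n", retval);
        MPI_Finalize();
        return retval;
    }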
Program PWSCF v.5.1.a (svn rev. mpi-refactor) starts on 27Sep2013 at 11:51:40
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 8 processors
K-points division: npool = 2
R & G space division: proc/nbgrp/npool/nimage = 4
Reading input from /home/akohlmey/compile/espresso-qmmm/COUPLE/tests/metal.pw.in
Current dimensions of program PWSCF are:
Max number of different atomic species (ntypx) = 10
Max number of k-points (npk) = 40000
Max angular momentum in pseudopotentials (lmaxx) = 3
Subspace diagonalization in iterative solution of the eigenvalue problem:
scalapack distributed-memory algorithm (size of sub-group: 2* 2 procs)
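The two figures above tie together as follows: the 8 MPI ranks are first split into npool = 2 pools for k-point parallelization, leaving 4 ranks per pool for the R & G space division, and ndiag = 4 of those ranks are arranged as the largest square grid (2 x 2) for the ScaLAPACK subspace diagonalization. A small sketch of that arithmetic (an illustration, not the actual PW routine):

    /* Sketch of the process breakdown reported above (assumed, not taken
       from the PW sources): nprocs / npool ranks per pool, and the largest
       square grid that fits in ndiag for ScaLAPACK. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int nprocs = 8, npool = 2, ndiag = 4;
        int per_pool = nprocs / npool;            /* 4 ranks per pool        */
        int grid = (int) sqrt((double) ndiag);    /* 2, hence a 2 x 2 group  */
        printf("R & G space division: proc/nbgrp/npool/nimage = %d\n", per_pool);
        printf("ScaLAPACK sub-group: %d* %d procs\n", grid, grid);
        return 0;
    }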
Parallelization info
--------------------
sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
Min          30      30      9                  217      217      41
Max          31      31     10                  218      218      44
Sum         121     121     37                  869      869     169
bravais-lattice index = 2
lattice parameter (alat) = 7.5000 a.u.
unit-cell volume = 105.4688 (a.u.)^3
number of atoms/cell = 1
number of atomic types = 1
number of electrons = 3.00
number of Kohn-Sham states= 6
kinetic-energy cutoff = 15.0000 Ry
charge density cutoff = 60.0000 Ry
convergence threshold = 1.0E-06
mixing beta = 0.7000
number of iterations used = 8 plain mixing
Exchange-correlation = SLA PZ NOGX NOGC ( 1 1 0 0 0)
celldm(1)= 7.500000 celldm(2)= 0.000000 celldm(3)= 0.000000
celldm(4)= 0.000000 celldm(5)= 0.000000 celldm(6)= 0.000000
crystal axes: (cart. coord. in units of alat)
a(1) = ( -0.500000 0.000000 0.500000 )
a(2) = ( 0.000000 0.500000 0.500000 )
a(3) = ( -0.500000 0.500000 0.000000 )
reciprocal axes: (cart. coord. in units 2 pi/alat)
b(1) = ( -1.000000 -1.000000 1.000000 )
b(2) = ( 1.000000 1.000000 1.000000 )
b(3) = ( -1.000000 1.000000 -1.000000 )
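The reciprocal axes printed above follow from b1 = (a2 x a3)/V, b2 = (a3 x a1)/V, b3 = (a1 x a2)/V with V = a1 . (a2 x a3) = 0.25 in units of alat^3, which is also where the unit-cell volume 0.25 * 7.5^3 = 105.4688 (a.u.)^3 quoted earlier comes from. A short check:

    /* Check the printed reciprocal axes: b_i = (a_j x a_k) / V, with
       V = a1.(a2 x a3); direct axes in alat, reciprocal in 2 pi/alat. */
    #include <stdio.h>

    static void cross(const double *u, const double *v, double *w)
    {
        w[0] = u[1]*v[2] - u[2]*v[1];
        w[1] = u[2]*v[0] - u[0]*v[2];
        w[2] = u[0]*v[1] - u[1]*v[0];
    }

    int main(void)
    {
        double a1[3] = {-0.5, 0.0, 0.5};
        double a2[3] = { 0.0, 0.5, 0.5};
        double a3[3] = {-0.5, 0.5, 0.0};
        double b1[3], b2[3], b3[3];
        cross(a2, a3, b1);
        cross(a3, a1, b2);
        cross(a1, a2, b3);
        double vol = a1[0]*b1[0] + a1[1]*b1[1] + a1[2]*b1[2];   /* 0.25 alat^3 */
        for (int i = 0; i < 3; ++i) { b1[i] /= vol; b2[i] /= vol; b3[i] /= vol; }
        printf("V = %.4f alat^3 = %.4f a.u.^3\n", vol, vol*7.5*7.5*7.5);
        printf("b1 = (%5.1f %5.1f %5.1f)\n", b1[0], b1[1], b1[2]);  /* -1 -1  1 */
        printf("b2 = (%5.1f %5.1f %5.1f)\n", b2[0], b2[1], b2[2]);  /*  1  1  1 */
        printf("b3 = (%5.1f %5.1f %5.1f)\n", b3[0], b3[1], b3[2]);  /* -1  1 -1 */
        return 0;
    }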
PseudoPot. # 1 for Al read from file:
/home/akohlmey/compile/espresso-qmmm/pseudo/Al.pz-vbc.UPF
MD5 check sum: 614279c88ff8d45c90147292d03ed420
Pseudo is Norm-conserving, Zval = 3.0
Generated by new atomic code, or converted to UPF format
Using radial grid of 171 points, 2 beta functions with:
l(1) = 0
l(2) = 1
atomic species valence mass pseudopotential
Al 3.00 26.98000 Al( 1.00)
48 Sym. Ops., with inversion, found
Cartesian axes
site n. atom positions (alat units)
1 Al tau( 1) = ( 0.0000000 0.0000000 0.0000000 )
number of k points= 10 Marzari-Vanderbilt smearing, width (Ry)= 0.0500
cart. coord. in units 2pi/alat
k( 1) = ( 0.1250000 0.1250000 0.1250000), wk = 0.0625000
k( 2) = ( 0.1250000 0.1250000 0.3750000), wk = 0.1875000
k( 3) = ( 0.1250000 0.1250000 0.6250000), wk = 0.1875000
k( 4) = ( 0.1250000 0.1250000 0.8750000), wk = 0.1875000
k( 5) = ( 0.1250000 0.3750000 0.3750000), wk = 0.1875000
k( 6) = ( 0.1250000 0.3750000 0.6250000), wk = 0.3750000
k( 7) = ( 0.1250000 0.3750000 0.8750000), wk = 0.3750000
k( 8) = ( 0.1250000 0.6250000 0.6250000), wk = 0.1875000
k( 9) = ( 0.3750000 0.3750000 0.3750000), wk = 0.0625000
k( 10) = ( 0.3750000 0.3750000 0.6250000), wk = 0.1875000
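The 48 symmetry operations reduce the full Brillouin-zone mesh to the 10 inequivalent points listed above, and the weights wk are normalized so that they sum to 2 (the spin-degeneracy factor of an unpolarized run is folded into the weights). A one-line check:

    /* Sum of the k-point weights printed above: they add up to 2.0. */
    #include <stdio.h>

    int main(void)
    {
        double wk[10] = {0.0625, 0.1875, 0.1875, 0.1875, 0.1875,
                         0.3750, 0.3750, 0.1875, 0.0625, 0.1875};
        double sum = 0.0;
        for (int i = 0; i < 10; ++i) sum += wk[i];
        printf("sum of wk = %.4f\n", sum);   /* 2.0000 */
        return 0;
    }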
Dense grid: 869 G-vectors FFT dimensions: ( 15, 15, 15)
Largest allocated arrays est. size (Mb) dimensions
Kohn-Sham Wavefunctions 0.00 Mb ( 29, 6)
NL pseudopotentials 0.00 Mb ( 29, 4)
Each V/rho on FFT grid 0.01 Mb ( 900)
Each G-vector array 0.00 Mb ( 217)
G-vector shells 0.00 Mb ( 30)
Largest temporary arrays est. size (Mb) dimensions
Auxiliary wavefunctions 0.01 Mb ( 29, 24)
Each subspace H/S matrix 0.00 Mb ( 12, 12)
Each <psi_i|beta_j> matrix 0.00 Mb ( 4, 6)
Arrays for rho mixing 0.11 Mb ( 900, 8)
Initial potential from superposition of free atoms
starting charge 2.99794, renormalised to 3.00000
Starting wfc are 4 randomized atomic wfcs + 2 random wfc
total cpu time spent up to now is 0.1 secs
per-process dynamical memory: 3.1 Mb
Self-consistent Calculation
iteration # 1 ecut= 15.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.00E-02, avg # of iterations = 4.3
Threshold (ethr) on eigenvalues was too large:
Diagonalizing with lowered threshold
Davidson diagonalization with overlap
ethr = 1.98E-04, avg # of iterations = 1.2
total cpu time spent up to now is 0.3 secs
total energy = -4.18547350 Ry
Harris-Foulkes estimate = -4.18624124 Ry
estimated scf accuracy < 0.00592498 Ry
iteration # 2 ecut= 15.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.97E-04, avg # of iterations = 1.0
total cpu time spent up to now is 0.3 secs
total energy = -4.18546703 Ry
Harris-Foulkes estimate = -4.18549537 Ry
estimated scf accuracy < 0.00046569 Ry
iteration # 3 ecut= 15.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.55E-05, avg # of iterations = 1.2
total cpu time spent up to now is 0.4 secs
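The Davidson threshold printed at the start of each iteration tightens with the previous estimate of the scf accuracy; the values above are reproduced by ethr ~ 0.1 * accuracy / nelec. The exact rule is internal to PW's electrons loop, so treat the prefactor here as an assumption that has only been checked against this log:

    /* Reproduce the printed Davidson thresholds from the previous scf
       accuracy estimates (assumed rule: ethr = 0.1 * accuracy / nelec). */
    #include <stdio.h>

    int main(void)
    {
        double nelec = 3.0;
        double dr2_iter1 = 0.00592498;   /* Ry, accuracy after iteration 1 */
        double dr2_iter2 = 0.00046569;   /* Ry, accuracy after iteration 2 */
        printf("ethr for iter 2: %.2E\n", 0.1 * dr2_iter1 / nelec);  /* 1.97E-04 */
        printf("ethr for iter 3: %.2E\n", 0.1 * dr2_iter2 / nelec);  /* 1.55E-05 */
        return 0;
    }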
End of self-consistent calculation
k = 0.1250 0.1250 0.1250 ( 107 PWs) bands (ev):
-2.7428 16.7431 20.1796 20.1796 23.2680 24.1724
k = 0.1250 0.1250 0.3750 ( 105 PWs) bands (ev):
-1.5642 13.6751 17.3099 18.8472 20.1257 22.7028
k = 0.1250 0.1250 0.6250 ( 102 PWs) bands (ev):
0.7488 11.5557 13.9822 15.3803 16.8437 20.9947
k = 0.1250 0.1250 0.8750 ( 104 PWs) bands (ev):
4.0828 8.6646 10.5472 14.4194 15.7421 20.0604
k = 0.1250 0.3750 0.3750 ( 100 PWs) bands (ev):
-0.4004 10.5636 15.0575 20.2794 22.2922 22.3024
k = 0.1250 0.3750 0.6250 ( 103 PWs) bands (ev):
1.8826 8.4273 12.9757 15.1047 21.3122 23.4591
k = 0.1250 0.3750 0.8750 ( 104 PWs) bands (ev):
5.1681 7.3418 9.7864 12.0728 20.3592 24.5663
k = 0.1250 0.6250 0.6250 ( 101 PWs) bands (ev):
4.1109 6.2842 10.9033 16.3672 18.2373 26.3764
k = 0.3750 0.3750 0.3750 ( 99 PWs) bands (ev):
0.7475 7.4153 19.3070 19.3070 21.3017 21.3022
k = 0.3750 0.3750 0.6250 ( 103 PWs) bands (ev):
3.0033 5.2361 16.0323 17.3399 19.1721 23.3127
the Fermi energy is 8.3513 ev
! total energy = -4.18546970 Ry
Harris-Foulkes estimate = -4.18546962 Ry
estimated scf accuracy < 0.00000026 Ry
The total energy is the sum of the following terms:
one-electron contribution = 2.94161250 Ry
hartree contribution = 0.01022684 Ry
xc contribution = -1.63496634 Ry
ewald contribution = -5.50183453 Ry
smearing contrib. (-TS) = -0.00050817 Ry
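As a quick consistency check, the five contributions listed above sum to the quoted total energy:

    /* The five energy terms above add up to the '!' total energy line. */
    #include <stdio.h>

    int main(void)
    {
        double one_el  =  2.94161250;   /* one-electron contribution, Ry */
        double hartree =  0.01022684;
        double xc      = -1.63496634;
        double ewald   = -5.50183453;
        double smear   = -0.00050817;   /* smearing contribution (-TS) */
        printf("sum = %.8f Ry\n", one_el + hartree + xc + ewald + smear);
        /* prints -4.18546970 Ry */
        return 0;
    }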
convergence has been achieved in 3 iterations
entering subroutine stress ...
total stress (Ry/bohr**3) (kbar) P= -14.54
-0.00009886 0.00000000 0.00000000 -14.54 0.00 0.00
0.00000000 -0.00009886 -0.00000000 0.00 -14.54 -0.00
0.00000000 -0.00000000 -0.00009886 0.00 -0.00 -14.54
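The stress tensor is printed twice, in Ry/bohr**3 (left block) and kbar (right block); the conversion factor is 1 Ry/bohr**3 ~ 1.47105e5 kbar, and the quoted pressure P is one third of the trace in kbar:

    /* Unit check on the stress output: Ry/bohr^3 -> kbar, P = trace/3. */
    #include <stdio.h>

    int main(void)
    {
        const double RY_BOHR3_TO_KBAR = 1.47105e5;   /* 1 Ry/bohr^3 in kbar */
        double sigma = -0.00009886;                  /* diagonal element, Ry/bohr^3 */
        double kbar  = sigma * RY_BOHR3_TO_KBAR;
        double trace = 3.0 * kbar;                   /* all diagonals are equal */
        printf("sigma_xx = %.2f kbar\n", kbar);      /* -14.54 */
        printf("P = %.2f kbar\n", trace / 3.0);      /* -14.54 */
        return 0;
    }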
Writing output data file pwscf.save
init_run : 0.02s CPU 0.05s WALL ( 1 calls)
electrons : 0.10s CPU 0.24s WALL ( 1 calls)
stress : 0.00s CPU 0.01s WALL ( 1 calls)
Called by init_run:
wfcinit : 0.01s CPU 0.02s WALL ( 1 calls)
potinit : 0.00s CPU 0.00s WALL ( 1 calls)
Called by electrons:
c_bands : 0.09s CPU 0.22s WALL ( 4 calls)
sum_band : 0.00s CPU 0.01s WALL ( 4 calls)
v_of_rho : 0.00s CPU 0.00s WALL ( 4 calls)
mix_rho : 0.00s CPU 0.00s WALL ( 4 calls)
Called by c_bands:
init_us_2 : 0.00s CPU 0.00s WALL ( 50 calls)
cegterg : 0.08s CPU 0.18s WALL ( 20 calls)
Called by *egterg:
h_psi : 0.01s CPU 0.05s WALL ( 63 calls)
g_psi : 0.00s CPU 0.00s WALL ( 38 calls)
cdiaghg : 0.06s CPU 0.12s WALL ( 53 calls)
Called by h_psi:
add_vuspsi : 0.00s CPU 0.00s WALL ( 63 calls)
General routines
calbec : 0.00s CPU 0.01s WALL ( 68 calls)
fft : 0.00s CPU 0.00s WALL ( 20 calls)
fftw : 0.01s CPU 0.05s WALL ( 798 calls)
davcio : 0.00s CPU 0.00s WALL ( 5 calls)
Parallel routines
fft_scatter : 0.00s CPU 0.04s WALL ( 818 calls)
PWSCF : 0.21s CPU 0.51s WALL
This run was terminated on: 11:51:40 27Sep2013
=------------------------------------------------------------------------------=
JOB DONE.
=------------------------------------------------------------------------------=
Call to libpwscf finished with exit status 0