Overview
========
*SLEPc for Python* (slepc4py) is a Python package that provides
convenient access to the functionality of SLEPc.
SLEPc [1]_, [2]_ implements algorithms and tools for the numerical
solution of large, sparse eigenvalue problems on parallel
computers. It can be used for linear eigenvalue problems in either
standard or generalized form, with real or complex arithmetic.
It can also be used for computing a partial SVD of a large, sparse,
rectangular matrix, and for solving nonlinear eigenvalue problems
(polynomial or general). Additionally, SLEPc provides solvers for
the computation of the action of a matrix function on a vector.
SLEPc is intended for computing a subset of the spectrum of a matrix
(or matrix pair). One can for instance approximate the largest
magnitude eigenvalues, or the smallest ones, or even those eigenvalues
located in a given region of the complex plane. Interior eigenvalues
are harder to compute, so SLEPc provides different methodologies. One
such method is to use a spectral transformation. Cheaper alternatives
are also available.
.. [1] J. E. Roman, C. Campos, L. Dalcin, E. Romero, A. Tomas.
SLEPc Users Manual. DSIC-II/24/02 - Revision 3.24.
D. Sistemas Informaticos y Computacion, Universitat Politecnica de
Valencia. 2025.
.. [2] V. Hernandez, J. E. Roman, V. Vidal.
SLEPc: A Scalable and Flexible Toolkit for the Solution of
Eigenvalue Problems, ACM Trans. Math. Softw. 31(3), pp. 351-362,
2005. https://doi.org/10.1145/1089014.1089019
.. include:: links.txt
Features
--------
Currently, the following types of eigenproblems can be addressed:
* Standard eigenvalue problem, :math:`Ax = \lambda x`, either for Hermitian or
non-Hermitian matrices.
* Generalized eigenvalue problem, :math:`Ax = \lambda Bx`, either Hermitian
positive-definite or not.
* Partial singular value decomposition of a rectangular matrix,
:math:`Av = \sigma u`.
* Generalized singular values of a matrix pair, :math:`Ax = cu`,
:math:`Bx = sv`.
* Polynomial eigenvalue problem, :math:`P(\lambda)x=0`.
* Nonlinear eigenvalue problem, :math:`T(\lambda)x=0`.
* Computing the action of a matrix function on a vector, :math:`w=f(\alpha A)v`.
For the linear eigenvalue problem, the following methods are available:
* Krylov eigensolvers, particularly Krylov-Schur, Arnoldi, and
Lanczos.
* Davidson-type eigensolvers, including Generalized Davidson and
Jacobi-Davidson.
* Subspace iteration and single vector iterations (inverse iteration,
RQI).
* Conjugate gradient methods such as LOBPCG.
* A contour integral solver using high-order moments.
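
As a minimal illustration of how these solvers are used from Python, the
following sketch assembles a small tridiagonal matrix (a 1-D Laplacian) and
computes its four smallest eigenvalues with Krylov-Schur; the matrix size,
solver type and number of requested eigenvalues are illustrative choices
rather than defaults::

    import sys
    import slepc4py
    slepc4py.init(sys.argv)          # initialize, passing command-line options
    from petsc4py import PETSc
    from slepc4py import SLEPc

    n = 100                          # illustrative problem size
    A = PETSc.Mat().create()
    A.setSizes([n, n])
    A.setFromOptions()
    A.setUp()
    rstart, rend = A.getOwnershipRange()
    for i in range(rstart, rend):    # tridiagonal 1-D Laplacian stencil
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n - 1:
            A[i, i + 1] = -1.0
        A[i, i] = 2.0
    A.assemble()

    E = SLEPc.EPS().create()
    E.setOperators(A)                            # standard problem (no B matrix)
    E.setProblemType(SLEPc.EPS.ProblemType.HEP)  # Hermitian eigenproblem
    E.setType(SLEPc.EPS.Type.KRYLOVSCHUR)        # Krylov-Schur eigensolver
    E.setWhichEigenpairs(SLEPc.EPS.Which.SMALLEST_REAL)
    E.setDimensions(nev=4)                       # number of requested eigenvalues
    E.setFromOptions()                           # honor -eps_xxx run-time options
    E.solve()

    xr, xi = A.createVecs()
    for i in range(E.getConverged()):
        k = E.getEigenpair(i, xr, xi)            # eigenvalue and eigenvector
        print(f"lambda = {k.real:.6f}   error = {E.computeError(i):.2e}")
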
For singular value computations, the following alternatives can be
used:
* Use an eigensolver via the cross-product matrix :math:`A^*A` or the cyclic
matrix :math:`\left[\begin{smallmatrix}0&A\\A^*&0\end{smallmatrix}\right]`.
* Explicitly restarted Lanczos bidiagonalization.
* Implicitly restarted Lanczos bidiagonalization (thick-restart
Lanczos).
* A basic randomized solver.
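
A hedged sketch of the corresponding ``SVD`` interface, computing a few of the
largest singular triplets of an already assembled PETSc matrix ``A`` with the
thick-restart Lanczos bidiagonalization solver (the solver type and number of
triplets are illustrative; older slepc4py releases name the first call
``setOperator``)::

    from slepc4py import SLEPc

    # A: an assembled petsc4py Mat (possibly rectangular)
    S = SLEPc.SVD().create()
    S.setOperators(A)                        # matrix whose partial SVD is sought
    S.setType(SLEPc.SVD.Type.TRLANCZOS)      # thick-restart Lanczos bidiagonalization
    S.setWhichSingularTriplets(SLEPc.SVD.Which.LARGEST)
    S.setDimensions(nsv=5)                   # number of requested singular triplets
    S.setFromOptions()
    S.solve()

    u = A.createVecs('left')                 # left singular vector (length m)
    v = A.createVecs('right')                # right singular vector (length n)
    for i in range(S.getConverged()):
        sigma = S.getSingularTriplet(i, u, v)
        print(f"sigma = {sigma:.6f}")
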
For polynomial eigenvalue problems, the following methods are available:
* Use an eigensolver to solve the generalized eigenvalue problem
obtained after linearization.
* TOAR and Q-Arnoldi, memory efficient variants of Arnoldi for polynomial
eigenproblems.
* Jacobi-Davidson for polynomial eigenproblems.
* A contour integral solver using high-order moments.
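
For example, a quadratic eigenproblem :math:`(K + \lambda C + \lambda^2 M)x = 0`
is handed to the ``PEP`` solver as the list of its coefficient matrices, ordered
by increasing degree. A sketch, assuming ``K``, ``C`` and ``M`` are assembled
PETSc matrices of the same size and choosing TOAR::

    from slepc4py import SLEPc

    # K, C, M: assembled petsc4py matrices; P(lambda) = K + lambda*C + lambda**2*M
    Q = SLEPc.PEP().create()
    Q.setOperators([K, C, M])                 # coefficients from degree 0 to 2
    Q.setProblemType(SLEPc.PEP.ProblemType.GENERAL)
    Q.setType(SLEPc.PEP.Type.TOAR)            # memory-efficient Arnoldi variant
    Q.setDimensions(nev=6)
    Q.setFromOptions()
    Q.solve()
    for i in range(Q.getConverged()):
        print("lambda =", Q.getEigenpair(i))  # eigenvalue; pass Vecs to also get x
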
For general nonlinear eigenvalue problems, the following methods can be used:
* Solve a polynomial eigenproblem obtained via polynomial interpolation.
* Rational interpolation and linearization (NLEIGS).
* Newton-type methods such as SLP or RII.
* A subspace projection method (nonlinear Arnoldi).
* A contour integral solver using high-order moments.
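
In slepc4py these methods are driven through the ``NEP`` object. A hedged
sketch for a problem given in split form, :math:`T(\lambda) = A_0 + e^{-\lambda} A_1`,
assuming ``A0`` and ``A1`` are assembled PETSc matrices and choosing the SLP
solver with an illustrative target value::

    from slepc4py import SLEPc

    # A0, A1: assembled petsc4py matrices; T(lambda) = A0 + exp(-lambda)*A1
    f0 = SLEPc.FN().create()
    f0.setType(SLEPc.FN.Type.RATIONAL)
    f0.setRationalNumerator([1.0])           # constant function f0(lambda) = 1
    f1 = SLEPc.FN().create()
    f1.setType(SLEPc.FN.Type.EXP)
    f1.setScale(-1.0)                        # f1(lambda) = exp(-lambda)

    N = SLEPc.NEP().create()
    N.setSplitOperator([A0, A1], [f0, f1])   # T(lambda) = f0(lambda)*A0 + f1(lambda)*A1
    N.setType(SLEPc.NEP.Type.SLP)            # successive linear problems
    N.setTarget(1.0)                         # illustrative target value
    N.setWhichEigenpairs(SLEPc.NEP.Which.TARGET_MAGNITUDE)
    N.setDimensions(nev=3)
    N.setFromOptions()
    N.solve()
    for i in range(N.getConverged()):
        print("lambda =", N.getEigenpair(i))
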
Computation of interior eigenvalues is supported by means of the
following methodologies:
* Spectral transformations, such as shift-and-invert. This technique
implicitly uses the inverse of the shifted matrix :math:`A-\sigma I` in order
to compute eigenvalues closest to a given target value, :math:`\sigma` (see
the sketch after this list).
* Harmonic extraction, a cheap alternative to shift-and-invert that
also tries to approximate eigenvalues closest to a target, :math:`\sigma`, but
without requiring a matrix inversion.
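
With the slepc4py ``EPS`` object both techniques are requested around a target
value. The sketch below selects shift-and-invert through the associated ``ST``
object (``A`` is assumed to be an assembled PETSc matrix and the target value
is illustrative)::

    from slepc4py import SLEPc

    # A: an assembled petsc4py Mat
    sigma = 4.0                                   # illustrative target value
    E = SLEPc.EPS().create()
    E.setOperators(A)
    E.setProblemType(SLEPc.EPS.ProblemType.HEP)
    E.setTarget(sigma)                            # look for eigenvalues near sigma
    E.setWhichEigenpairs(SLEPc.EPS.Which.TARGET_MAGNITUDE)
    st = E.getST()                                # spectral transformation object
    st.setType(SLEPc.ST.Type.SINVERT)             # shift-and-invert
    E.setFromOptions()
    E.solve()
    for i in range(E.getConverged()):
        print("lambda =", E.getEigenvalue(i))

Harmonic extraction would instead keep the default spectral transformation and
request ``E.setExtraction(SLEPc.EPS.Extraction.HARMONIC)``.
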
Other remarkable features include:
* High computational efficiency, by using NumPy and SLEPc under the
hood.
* Data-structure neutral implementation, by using efficient sparse
matrix storage provided by PETSc. Implicit matrix representation is
also available by providing basic operations such as matrix-vector
products as user-defined Python functions (see the sketch after this list).
* Run-time flexibility, by specifying numerous settings at the command
line.
* Ability to do the computation in parallel and/or using GPUs.
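
The matrix-free mode mentioned in the list above relies on PETSc shell matrices
whose action is implemented by a user-defined Python class. A hedged,
sequential-only sketch for a 1-D Laplacian applied without storing its
entries::

    from petsc4py import PETSc
    from slepc4py import SLEPc

    class Laplacian1D:
        """Matrix-free 1-D Laplacian: only the action y = A*x is provided."""
        def mult(self, mat, x, y):
            xx = x.getArray(readonly=True)
            yy = y.getArray()
            n = len(xx)
            for i in range(n):
                left = xx[i - 1] if i > 0 else 0.0
                right = xx[i + 1] if i < n - 1 else 0.0
                yy[i] = 2.0 * xx[i] - left - right

    n = 100
    A = PETSc.Mat().createPython([n, n], context=Laplacian1D(),
                                 comm=PETSc.COMM_SELF)   # shell (implicit) matrix
    A.setUp()

    E = SLEPc.EPS().create()
    E.setOperators(A)
    E.setProblemType(SLEPc.EPS.ProblemType.HEP)
    E.setFromOptions()
    E.solve()
    print("converged eigenpairs:", E.getConverged())
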
Components
----------
SLEPc provides the following components, which are mirrored by slepc4py
for use from Python. The first six components are solvers for
different classes of problems, while the rest can be considered
auxiliary objects.
:EPS: The Eigenvalue Problem Solver is the component that provides all
the functionality necessary to define and solve an
eigenproblem. It provides mechanisms for completely specifying
the problem: the problem type (e.g., standard symmetric), the number
of eigenvalues to compute, and the part of the spectrum of
interest. Once the problem has been defined, a collection of
solvers can be used to compute the required solutions. The
behavior of the solvers can be tuned by means of a few
parameters, such as the maximum dimension of the subspace to be
used during the computation.
:SVD: This component is the analog of ``EPS`` for the case of Singular
Value Decompositions. The user provides a rectangular matrix and
specifies how many singular values and vectors are to be
computed, whether the largest or smallest ones, as well as some
other parameters for fine tuning the computation. Different
solvers are available, as in the case of ``EPS``.
:PEP: This component is the analog of ``EPS`` for the case of Polynomial
Eigenvalue Problems. The user provides the coefficient matrices of
the polynomial. Several parameters can be specified, as in
the case of ``EPS``. It is also possible to indicate whether the
problem belongs to a special type, e.g., symmetric or gyroscopic.
:NEP: This component covers the case of general nonlinear eigenproblems,
:math:`T(\lambda)x=0`. The user provides the parameter-dependent
matrix :math:`T` via the split form or by means of callback functions.
:MFN: This component provides the functionality for computing the action
of a matrix function on a vector. Given a matrix :math:`A` and a
vector :math:`b`, the call ``MFNSolve(mfn,b,x)`` computes
:math:`x=f(A)b`, where :math:`f` is a function such as the exponential
(a sketch is given at the end of this section).
:LME: This component provides the functionality for solving linear matrix
equations, such as Lyapunov or Sylvester equations, where the sought
solution has low rank.
:ST: The Spectral Transformation is a component that provides
convenient implementations of common spectral
transformations. These are simple transformations that map
eigenvalues to different positions, in such a way that
convergence to wanted eigenvalues is enhanced. The most common
spectral transformation is shift-and-invert, which allows for the
computation of eigenvalues closest to a given target value.
:BV: This component encapsulates the concept of a set of Basis Vectors
spanning a vector space. This component provides convenient access
to common operations such as orthogonalization of vectors. The
``BV`` component is usually not required by end-users.
:DS: The Dense System (or Direct Solver) component, used internally to
solve dense eigenproblems of small size that appear in the course
of iterative eigensolvers.
:FN: A component used to define mathematical functions. This is required
by the end-user, for instance, to define the function :math:`T(\cdot)` when
solving nonlinear eigenproblems with ``NEP`` in split form.
:RG: A component used to define a region of the complex plane such as an
ellipse or a rectangle. This is required by end-users in some cases
such as contour-integral eigensolvers.
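
As an illustration of how the ``RG`` object is combined with a solver, the
sketch below attaches an elliptic region to the contour-integral eigensolver of
``EPS`` in order to compute all eigenvalues enclosed by the ellipse (``A`` is an
assumed assembled PETSc matrix and the ellipse parameters are illustrative)::

    from slepc4py import SLEPc

    # A: an assembled petsc4py Mat
    E = SLEPc.EPS().create()
    E.setOperators(A)
    E.setType(SLEPc.EPS.Type.CISS)            # contour-integral eigensolver
    E.setWhichEigenpairs(SLEPc.EPS.Which.ALL) # all eigenvalues inside the region
    rg = E.getRG()                            # region attached to the solver
    rg.setType(SLEPc.RG.Type.ELLIPSE)
    rg.setEllipseParameters(center=2.0, radius=0.5, vscale=1.0)  # illustrative
    E.setFromOptions()
    E.solve()
    for i in range(E.getConverged()):
        print("lambda =", E.getEigenvalue(i))
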
In addition to the above components, some extra functionality is provided
in the ``Sys`` and ``Util`` sections.
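
Finally, a hedged sketch of the ``MFN`` component combined with an exponential
``FN`` to compute :math:`x = e^{tA} b` (``A`` is an assumed assembled PETSc
matrix; the scaling factor ``t`` and the tolerance are illustrative)::

    from slepc4py import SLEPc

    # A: an assembled petsc4py Mat
    t = 0.1                                   # illustrative scaling factor
    b, x = A.createVecs()
    b.set(1.0)                                # some right-hand side vector

    M = SLEPc.MFN().create()
    M.setOperator(A)
    f = M.getFN()                             # function to be applied to A
    f.setType(SLEPc.FN.Type.EXP)
    f.setScale(t)                             # evaluate exp(t*A) rather than exp(A)
    M.setTolerances(1e-7)
    M.setFromOptions()
    M.solve(b, x)                             # x = exp(t*A) * b
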