A basic Python interface to CHOLMOD
SYNOPSIS
    import numpy as np
    import numpysane as nps
    import mrcal
    from scipy.sparse import csr_matrix

    indptr  = np.array([0, 2, 3, 6, 8])
    indices = np.array([0, 2, 2, 0, 1, 2, 1, 2])
    data    = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)

    Jsparse = csr_matrix((data, indices, indptr))
    Jdense  = Jsparse.toarray()
    print(Jdense)
    ===> [[1. 0. 2.]
          [0. 0. 3.]
          [4. 5. 6.]
          [0. 7. 8.]]

    bt = np.array(((1., 5., 3.), (2., -2., -8.)))
    print(nps.transpose(bt))
    ===> [[ 1.  2.]
          [ 5. -2.]
          [ 3. -8.]]

    F  = mrcal.CHOLMOD_factorization(Jsparse)
    xt = F.solve_xt_JtJ_bt(bt)
    print(nps.transpose(xt))
    ===> [[ 0.02199662  0.33953751]
          [ 0.31725888  0.46982516]
          [-0.21996616 -0.50648618]]

    print(nps.matmult(nps.transpose(Jdense), Jdense, nps.transpose(xt)))
    ===> [[ 1.  2.]
          [ 5. -2.]
          [ 3. -8.]]
The core of the mrcal optimizer is a sparse linear least squares solver using
CHOLMOD to solve a large, sparse linear system. CHOLMOD is a C library, but it
is sometimes useful to invoke it from Python.
The CHOLMOD_factorization class factors a matrix JtJ, and the solve_xt_JtJ_bt()
method uses that factorization to efficiently solve the linear equation JtJ x =
b. The usual linear algebra conventions refer to column vectors, but numpy
generally deals with row vectors, so I talk about solving the equivalent
transposed problem: xt JtJ = bt. The difference is purely notational.
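As an illustrative sanity check of that notational point (a sketch, not part of
the mrcal API), the same statement can be written with dense numpy, reusing
Jdense and bt from the SYNOPSIS above; np.linalg.solve() stands in for the
CHOLMOD-based solve:

    JtJ = nps.matmult(nps.transpose(Jdense), Jdense)  # 3x3 symmetric matrix
    x   = np.linalg.solve(JtJ, nps.transpose(bt))     # columns of x solve JtJ x = b
    xt  = nps.transpose(x)                            # row-vector form, like solve_xt_JtJ_bt()
    print(np.allclose(nps.matmult(xt, JtJ), bt))      # xt JtJ = bt, since JtJ is symmetric
    ===> True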
The class takes a sparse array J as an argument in __init__(). J is optional,
but there's no way in Python to pass it later, so from Python you should always
pass J. The argument is optional only to allow internal initialization from the
C code.
J must be given as an instance of scipy.sparse.csr_matrix. csr is a row-major
sparse representation. CHOLMOD wants column-major matrices, so it sees this
matrix J as a transpose: the CHOLMOD documentation refers to this as "At". And
the CHOLMOD documentation talks about factoring AAt, while I talk about
factoring JtJ. These are the same thing.
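An illustrative check of that reinterpretation, using scipy only and reusing
Jsparse, Jdense from the SYNOPSIS (a sketch, not part of the mrcal API): the
CSR arrays of J are exactly the CSC arrays of transpose(J), which is why a
column-major library can read our row-major J as "At".

    from scipy.sparse import csc_matrix
    Jt_csc = csc_matrix(Jdense.T)   # column-major storage of transpose(J)
    print(np.array_equal(Jsparse.indptr,  Jt_csc.indptr ),
          np.array_equal(Jsparse.indices, Jt_csc.indices),
          np.array_equal(Jsparse.data,    Jt_csc.data   ))
    ===> True True True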
The factorization of JtJ happens in __init__(), and we use this factorization
later (as many times as we want) to solve JtJ x = b by calling
solve_xt_JtJ_bt().
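For instance, several different right-hand sides can reuse a single
factorization. A minimal sketch, with F from the SYNOPSIS above and made-up
right-hand sides; only __init__() pays for the factorization, and each
subsequent solve is comparatively cheap:

    bt0 = np.array(((1., 0., 0.),))
    bt1 = np.array(((0., 1., 0.),))
    xt0 = F.solve_xt_JtJ_bt(bt0)    # uses the factorization computed in __init__()
    xt1 = F.solve_xt_JtJ_bt(bt1)    # no re-factorization needed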
This class carefully checks its input for validity, but makes no effort to be
flexible: anything that doesn't look right will result in an exception.
Specifically:
- J.data, J.indices, J.indptr must all be numpy arrays
- J.data, J.indices, J.indptr must all have exactly one dimension
- J.data, J.indices, J.indptr must all be C-contiguous (the normal numpy order)
- J.data must hold 64-bit floating-point values (dtype=float)
- J.indices, J.indptr must hold 32-bit integers (dtype=np.int32)
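An illustrative sketch of coercing some other scipy.sparse matrix M into this
exact form before constructing the factorization (M and the intermediate names
here are hypothetical, not part of the mrcal API; this assumes the matrix is
small enough for 32-bit indices):

    Jcsr = csr_matrix(M).astype(np.float64)          # row-major, 64-bit float data
    Jcsr = csr_matrix((Jcsr.data,
                       Jcsr.indices.astype(np.int32),
                       Jcsr.indptr .astype(np.int32)),
                      shape = Jcsr.shape)
    F = mrcal.CHOLMOD_factorization(Jcsr)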
ARGUMENTS
The __init__() function takes
- J: a sparse array in a scipy.sparse.csr_matrix object