<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML3.2 EN">
<HTML>
<HEAD>
<META NAME="GENERATOR" CONTENT="DOCTEXT">
<TITLE>MatCreateMPISBAIJ</TITLE>
</HEAD>
<BODY BGCOLOR="FFFFFF">
<A NAME="MatCreateMPISBAIJ"><H1>MatCreateMPISBAIJ</H1></A>
Creates a sparse parallel matrix in symmetric block AIJ format (block compressed row).  For good matrix assembly performance the user should preallocate the matrix storage by setting the parameters  d_nz (or d_nnz) and o_nz (or o_nnz).  By setting these parameters accurately, performance can be increased by more than a factor of 50. 
<H3><FONT COLOR="#CC3333">Synopsis</FONT></H3>
<PRE>
#include "petscmat.h" 
PetscErrorCode  MatCreateMPISBAIJ(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt d_nz,const PetscInt d_nnz[],PetscInt o_nz,const PetscInt o_nnz[],Mat *A)
</PRE>
Collective on <A HREF="../Sys/MPI_Comm.html#MPI_Comm">MPI_Comm</A>
<P>
<H3><FONT COLOR="#CC3333">Input Parameters</FONT></H3>
<TABLE border="0" cellpadding="0" cellspacing="0">
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>comm </B></TD><TD>- MPI communicator
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>bs   </B></TD><TD>- size of block
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>m </B></TD><TD>- number of local rows (or <A HREF="../Sys/PETSC_DECIDE.html#PETSC_DECIDE">PETSC_DECIDE</A> to have it calculated if M is given)
This value should be the same as the local size used in creating the
y vector for the matrix-vector product y = Ax.
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>n </B></TD><TD>- number of local columns (or <A HREF="../Sys/PETSC_DECIDE.html#PETSC_DECIDE">PETSC_DECIDE</A> to have it calculated if N is given)
This value should be the same as the local size used in creating the
x vector for the matrix-vector product y = Ax.
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>M </B></TD><TD>- number of global rows (or <A HREF="../Sys/PETSC_DETERMINE.html#PETSC_DETERMINE">PETSC_DETERMINE</A> to have it calculated if m is given)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>N </B></TD><TD>- number of global columns (or <A HREF="../Sys/PETSC_DETERMINE.html#PETSC_DETERMINE">PETSC_DETERMINE</A> to have it calculated if n is given)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>d_nz  </B></TD><TD>- number of block nonzeros per block row in the diagonal portion of the local
submatrix (same for all local block rows)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>d_nnz </B></TD><TD>- array containing the number of block nonzeros in the various block rows
of the upper triangular portion of the diagonal portion of the local submatrix
(possibly different for each block row) or <A HREF="../Sys/PETSC_NULL.html#PETSC_NULL">PETSC_NULL</A>.
You must leave room for the diagonal entry even if it is zero.
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>o_nz  </B></TD><TD>- number of block nonzeros per block row in the off-diagonal portion of the local
submatrix (same for all local block rows).
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>o_nnz </B></TD><TD>- array containing the number of block nonzeros in the various block rows of the
off-diagonal portion of the local submatrix (possibly different for
each block row) or <A HREF="../Sys/PETSC_NULL.html#PETSC_NULL">PETSC_NULL</A>.
</TD></TR></TABLE>
<P>
<H3><FONT COLOR="#CC3333">Output Parameter</FONT></H3>
<DT><B>A </B> - the matrix
<br>
<P>
<H3><FONT COLOR="#CC3333">Options Database Keys</FONT></H3>
<DT><B>-mat_no_unroll </B> - uses code that does not unroll the loops in the
block calculations (much slower)
<br>
<DT><B>-mat_block_size </B> - size of the blocks to use
<br>
<DT><B>-mat_mpi </B> - use the parallel matrix data structures even on one processor
(defaults to using SeqSBAIJ format on one processor)
<br>
<P>
It is recommended that one use the <A HREF="../Mat/MatCreate.html#MatCreate">MatCreate</A>(), <A HREF="../Mat/MatSetType.html#MatSetType">MatSetType</A>() and/or <A HREF="../Mat/MatSetFromOptions.html#MatSetFromOptions">MatSetFromOptions</A>(),
MatXXXXSetPreallocation() paradigm instead of calling this routine directly.
[MatXXXXSetPreallocation() is, for example, <A HREF="../Mat/MatSeqAIJSetPreallocation.html#MatSeqAIJSetPreallocation">MatSeqAIJSetPreallocation</A>]
<P>
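As a purely illustrative sketch of this recommended paradigm (the names bs, m, n, d_nz and o_nz below are placeholders rather than values from this manual page, and MatMPISBAIJSetPreallocation() is assumed to be the SBAIJ preallocation routine), the creation sequence might look like
<PRE>
   Mat            A;
   PetscErrorCode ierr;

   ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
   ierr = MatSetSizes(A,m,n,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
   ierr = MatSetType(A,MATMPISBAIJ);CHKERRQ(ierr);
   ierr = MatSetFromOptions(A);CHKERRQ(ierr);
   /* preallocate d_nz/o_nz block nonzeros per block row; pass d_nnz/o_nnz
      arrays instead of PETSC_NULL to give per-block-row estimates */
   ierr = MatMPISBAIJSetPreallocation(A,bs,d_nz,PETSC_NULL,o_nz,PETSC_NULL);CHKERRQ(ierr);
   /* ... set values with MatSetValuesBlocked()/MatSetValues(), then
      MatAssemblyBegin()/MatAssemblyEnd() ... */
</PRE>
<P>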
<H3><FONT COLOR="#CC3333">C++ variants</FONT></H3><TABLE border="0" cellpadding="0" cellspacing="0">
<TR><TD WIDTH=40></TD><TD>&nbsp;MatCreateMPISBAIJ(PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,Mat *A)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(PETSC_COMM_WORLD,bs,m,n,M,N,0,PETSC_NULL,0,PETSC_NULL,A)</TD></TR>

<TR><TD WIDTH=40></TD><TD>&nbsp;MatCreateMPISBAIJ(PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,const PetscInt nnz[],const PetscInt onz[],Mat *A)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(PETSC_COMM_WORLD,bs,m,n,M,N,0,nnz,0,onz,A)</TD></TR>

<TR><TD WIDTH=40></TD><TD>&nbsp;MatCreateMPISBAIJ(PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt nz,PetscInt nnz,Mat *A)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(PETSC_COMM_WORLD,bs,m,n,M,N,nz,PETSC_NULL,nnz,PETSC_NULL,A)</TD></TR>

<TR><TD WIDTH=40></TD><TD> Mat MatCreateMPISBAIJ(PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(PETSC_COMM_WORLD,bs,m,n,M,N,0,PETSC_NULL,0,PETSC_NULL,&A); return A;</TD></TR>

<TR><TD WIDTH=40></TD><TD> Mat MatCreateMPISBAIJ(PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,const PetscInt nnz[],const PetscInt onz[])</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(PETSC_COMM_WORLD,bs,m,n,M,N,0,nnz,0,onz,&A); return A;</TD></TR>

<TR><TD WIDTH=40></TD><TD> Mat MatCreateMPISBAIJ(PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt nz,PetscInt nnz)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(PETSC_COMM_WORLD,bs,m,n,M,N,nz,PETSC_NULL,nnz,PETSC_NULL,&A); return A;</TD></TR>

<TR><TD WIDTH=40></TD><TD> Mat MatCreateMPISBAIJ(PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt nz,const PetscInt nnz[],PetscInt onz,const PetscInt onnz[])</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(PETSC_COMM_WORLD,bs,m,n,M,N,nz,nnz,onz,onnz,&A); return A;</TD></TR>

<TR><TD WIDTH=40></TD><TD>&nbsp;MatCreateMPISBAIJ(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,Mat *A)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(comm,bs,m,n,M,N,0,PETSC_NULL,0,PETSC_NULL,A)</TD></TR>

<TR><TD WIDTH=40></TD><TD>&nbsp;MatCreateMPISBAIJ(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,const PetscInt nnz[],const PetscInt onz[],Mat *A)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(comm,bs,m,n,M,N,0,nnz,0,onz,A)</TD></TR>

<TR><TD WIDTH=40></TD><TD>&nbsp;MatCreateMPISBAIJ(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt nz,PetscInt nnz,Mat *A)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(comm,bs,m,n,M,N,nz,PETSC_NULL,nnz,PETSC_NULL,A)</TD></TR>

<TR><TD WIDTH=40></TD><TD> Mat MatCreateMPISBAIJ(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(comm,bs,m,n,M,N,0,PETSC_NULL,0,PETSC_NULL,&A); return A;</TD></TR>

<TR><TD WIDTH=40></TD><TD> Mat MatCreateMPISBAIJ(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,const PetscInt nnz[],const PetscInt onz[])</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(comm,bs,m,n,M,N,0,nnz,0,onz,&A); return A;</TD></TR>

<TR><TD WIDTH=40></TD><TD> Mat MatCreateMPISBAIJ(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt nz,PetscInt nnz)</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(comm,bs,m,n,M,N,nz,PETSC_NULL,nnz,PETSC_NULL,&A); return A;</TD></TR>

<TR><TD WIDTH=40></TD><TD> Mat MatCreateMPISBAIJ(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt nz,const PetscInt nnz[],PetscInt onz,const PetscInt onnz[])</TD><TD WIDTH=20></TD><TD>-></TD><TD WIDTH=20></TD><TD>MatCreateMPISBAIJ(comm,bs,m,n,M,N,nz,nnz,onz,onnz,&A); return A;</TD></TR>

</TABLE>
<H3><FONT COLOR="#CC3333">Notes</FONT></H3>
The number of rows and columns must be divisible by the block size.
This matrix type does not support complex Hermitian operations.
<P>
The user MUST specify either the local or global matrix dimensions
(possibly both).
<P>
If <A HREF="../Sys/PETSC_DECIDE.html#PETSC_DECIDE">PETSC_DECIDE</A> or <A HREF="../Sys/PETSC_DETERMINE.html#PETSC_DETERMINE">PETSC_DETERMINE</A> is used for a particular argument on one processor
then it must be used on all processors that share the object for that argument.
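<P>
For example (an illustrative sketch only, with comm, bs, M, N, d_nz and o_nz as placeholders and A and ierr assumed to be declared), one can give only the global dimensions and let PETSc choose the local split on every process:
<PRE>
   ierr = MatCreateMPISBAIJ(comm,bs,PETSC_DECIDE,PETSC_DECIDE,M,N,d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);CHKERRQ(ierr);
</PRE>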
<P>
If the *_nnz parameter is given then the *_nz parameter is ignored.
<P>
<H3><FONT COLOR="#CC3333">Storage Information</FONT></H3>
For a square global matrix we define each processor's diagonal portion
to be its local rows and the corresponding columns (a square submatrix);
each processor's off-diagonal portion encompasses the remainder of the
local matrix (a rectangular submatrix).
<P>
The user can specify preallocated storage for the diagonal part of
the local submatrix with either d_nz or d_nnz (not both).  Set
d_nz=<A HREF="../Sys/PETSC_DEFAULT.html#PETSC_DEFAULT">PETSC_DEFAULT</A> and d_nnz=<A HREF="../Sys/PETSC_NULL.html#PETSC_NULL">PETSC_NULL</A> for PETSc to control dynamic
memory allocation.  Likewise, specify preallocated storage for the
off-diagonal part of the local submatrix with o_nz or o_nnz (not both).
<P>
Consider a processor that owns rows 3, 4 and 5 of a parallel matrix. In
the figure below we depict these three local rows and all columns (0-11).
<P>
<PRE>
           0 1 2 3 4 5 6 7 8 9 10 11
          -------------------
   row 3  |  o o o d d d o o o o o o
   row 4  |  o o o d d d o o o o o o
   row 5  |  o o o d d d o o o o o o
          -------------------
</PRE>

<P>
Thus, any entries in the d locations are stored in the d (diagonal)
submatrix, and any entries in the o locations are stored in the
o (off-diagonal) submatrix.  Note that the d matrix is stored in
MatSeqSBAIJ format and the o submatrix in <A HREF="../Mat/MATSEQBAIJ.html#MATSEQBAIJ">MATSEQBAIJ</A> format.
<P>
Now d_nz should indicate the number of block nonzeros per row in the upper triangular
plus the diagonal part of the d matrix,
and o_nz should indicate the number of block nonzeros per row in the o matrix.
In general, for PDE problems in which most nonzeros are near the diagonal,
one expects d_nz &gt;&gt; o_nz.   For large problems you MUST preallocate memory
or you will get TERRIBLE performance; see the users' manual chapter on
matrices.
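<P>
As a purely illustrative sketch tied to the figure above (assuming a block size of 1 so that block rows coincide with rows, that every marked position actually holds a nonzero, and that the lower triangular columns 0-2 need no preallocation because only the upper triangle is stored), the process owning rows 3-5 of this 12x12 matrix might preallocate and create its portion as follows
<PRE>
   Mat            A;
   PetscErrorCode ierr;
   /* rows 3,4,5: upper triangular part of the d block has columns 3-5, 4-5, 5 */
   PetscInt       d_nnz[3] = {3,2,1};
   /* columns 6-11 fall in the o block for each of the three local rows */
   PetscInt       o_nnz[3] = {6,6,6};

   ierr = MatCreateMPISBAIJ(PETSC_COMM_WORLD,1,3,3,12,12,0,d_nnz,0,o_nnz,&A);CHKERRQ(ierr);
</PRE>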
<P>

<P>
<H3><FONT COLOR="#CC3333">Keywords</FONT></H3>
 matrix, block, aij, compressed row, sparse, parallel
<BR>
<P>
<H3><FONT COLOR="#CC3333">See Also</FONT></H3>
 <A HREF="../Mat/MatCreate.html#MatCreate">MatCreate</A>(), <A HREF="../Mat/MatCreateSeqSBAIJ.html#MatCreateSeqSBAIJ">MatCreateSeqSBAIJ</A>(), <A HREF="../Mat/MatSetValues.html#MatSetValues">MatSetValues</A>(), <A HREF="../Mat/MatCreateMPIBAIJ.html#MatCreateMPIBAIJ">MatCreateMPIBAIJ</A>()
<BR><P><B><FONT COLOR="#CC3333">Level:</FONT></B> intermediate
<BR><B><FONT COLOR="#CC3333">Location:</FONT></B> <A HREF="../../../src/mat/impls/sbaij/mpi/mpisbaij.c.html#MatCreateMPISBAIJ">src/mat/impls/sbaij/mpi/mpisbaij.c</A>
<BR><A HREF="./index.html">Index of all Mat routines</A>
<BR><A HREF="../../index.html">Table of Contents for all manual pages</A>
<BR><A HREF="../singleindex.html">Index of all manual pages</A>
</BODY></HTML>