<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD> <link rel="canonical" href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJSELL.html" />
<META NAME="GENERATOR" CONTENT="DOCTEXT">
<TITLE>MatCreateMPIAIJSELL</TITLE>
</HEAD>
<BODY BGCOLOR="FFFFFF">
<div id="version" align=right><b>petsc-3.10.3 2018-12-18</b></div>
<div id="bugreport" align=right><a href="mailto:petsc-maint@mcs.anl.gov?subject=Typo or Error in Documentation &body=Please describe the typo or error in the documentation: petsc-3.10.3 v3.10.3 docs/manualpages/Mat/MatCreateMPIAIJSELL.html "><small>Report Typos and Errors</small></a></div>
<A NAME="MatCreateMPIAIJSELL"><H1>MatCreateMPIAIJSELL</H1></A>
Creates a sparse parallel matrix whose local portions are stored as SEQAIJSELL matrices (a matrix class that inherits from SEQAIJ but performs some operations in SELL format). The same guidelines that apply to MPIAIJ matrices for preallocating the matrix storage apply here as well.
<H3><FONT COLOR="#CC3333">Synopsis</FONT></H3>
<PRE>
<A HREF="../Sys/PetscErrorCode.html#PetscErrorCode">PetscErrorCode</A> <A HREF="../Mat/MatCreateMPIAIJSELL.html#MatCreateMPIAIJSELL">MatCreateMPIAIJSELL</A>(<A HREF="../Sys/MPI_Comm.html#MPI_Comm">MPI_Comm</A> comm,<A HREF="../Sys/PetscInt.html#PetscInt">PetscInt</A> m,<A HREF="../Sys/PetscInt.html#PetscInt">PetscInt</A> n,<A HREF="../Sys/PetscInt.html#PetscInt">PetscInt</A> M,<A HREF="../Sys/PetscInt.html#PetscInt">PetscInt</A> N,<A HREF="../Sys/PetscInt.html#PetscInt">PetscInt</A> d_nz,const <A HREF="../Sys/PetscInt.html#PetscInt">PetscInt</A> d_nnz[],<A HREF="../Sys/PetscInt.html#PetscInt">PetscInt</A> o_nz,const <A HREF="../Sys/PetscInt.html#PetscInt">PetscInt</A> o_nnz[],<A HREF="../Mat/Mat.html#Mat">Mat</A> *A)
</PRE>
Collective on <A HREF="../Sys/MPI_Comm.html#MPI_Comm">MPI_Comm</A>
<P>
<H3><FONT COLOR="#CC3333">Input Parameters</FONT></H3>
<TABLE border="0" cellpadding="0" cellspacing="0">
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>comm </B></TD><TD>- MPI communicator
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>m </B></TD><TD>- number of local rows (or <A HREF="../Sys/PETSC_DECIDE.html#PETSC_DECIDE">PETSC_DECIDE</A> to have it calculated if M is given).
This value should be the same as the local size used in creating the
y vector for the matrix-vector product y = Ax.
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>n </B></TD><TD>- number of local columns (or <A HREF="../Sys/PETSC_DECIDE.html#PETSC_DECIDE">PETSC_DECIDE</A> to have it calculated if N is given).
This value should be the same as the local size used in creating the
x vector for the matrix-vector product y = Ax. For square matrices n is almost always m.
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>M </B></TD><TD>- number of global rows (or <A HREF="../Sys/PETSC_DETERMINE.html#PETSC_DETERMINE">PETSC_DETERMINE</A> to have it calculated if m is given)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>N </B></TD><TD>- number of global columns (or <A HREF="../Sys/PETSC_DETERMINE.html#PETSC_DETERMINE">PETSC_DETERMINE</A> to have it calculated if n is given)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>d_nz </B></TD><TD>- number of nonzeros per row in DIAGONAL portion of local submatrix
(same value is used for all local rows)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>d_nnz </B></TD><TD>- array containing the number of nonzeros in the various rows of the
DIAGONAL portion of the local submatrix (possibly different for each row),
or NULL if d_nz is used to specify the nonzero structure.
The size of this array is equal to the number of local rows, i.e., 'm'.
For matrices you plan to factor, you must leave room for the diagonal entry
and set that entry even if it is zero.
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>o_nz </B></TD><TD>- number of nonzeros per row in the OFF-DIAGONAL portion of local
submatrix (same value is used for all local rows).
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>o_nnz </B></TD><TD>- array containing the number of nonzeros in the various rows of the
OFF-DIAGONAL portion of the local submatrix (possibly different for
each row), or NULL if o_nz is used to specify the nonzero
structure. The size of this array is equal to the number
of local rows, i.e., 'm'.
</TD></TR></TABLE>
<P>
<H3><FONT COLOR="#CC3333">Output Parameter</FONT></H3>
<DT><B>A </B> - the matrix
<br>
<P>
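A minimal calling sequence is sketched below. The local size, the per-row nonzero counts (5 in the diagonal block, 2 in the off-diagonal block), and the assembly placeholder are illustrative assumptions rather than values prescribed by this routine:
<PRE>
#include &lt;petscmat.h&gt;

int main(int argc,char **argv)
{
  Mat            A;
  PetscInt       m = 100;                 /* local rows; illustrative value */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
  /* Let PETSc determine the global sizes from the local ones and
     preallocate 5 nonzeros per row in the diagonal block and 2 per row
     in the off-diagonal block (placeholder counts). */
  ierr = MatCreateMPIAIJSELL(PETSC_COMM_WORLD,m,m,PETSC_DETERMINE,PETSC_DETERMINE,
                             5,NULL,2,NULL,&A);CHKERRQ(ierr);
  /* ... fill the matrix with MatSetValues() ... */
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
</PRE>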
<H3><FONT COLOR="#CC3333">Notes</FONT></H3>
If the *_nnz parameter is given, then the *_nz parameter is ignored.
<P>
The m, n, M, N parameters specify the size of the matrix and its partitioning across
processors, while the d_nz, d_nnz, o_nz, o_nnz parameters specify the approximate
storage requirements for this matrix.
<P>
If <A HREF="../Sys/PETSC_DECIDE.html#PETSC_DECIDE">PETSC_DECIDE</A> or <A HREF="../Sys/PETSC_DETERMINE.html#PETSC_DETERMINE">PETSC_DETERMINE</A> is used for a particular argument on one
processor, then it must be used on all processors that share the object for
that argument.
<P>
The user MUST specify either the local or global matrix dimensions
(possibly both).
<P>
The parallel matrix is partitioned such that the first m0 rows belong to
process 0, the next m1 rows belong to process 1, the next m2 rows belong
to process 2, etc., where m0, m1, m2, ... are the values of the input parameter 'm' on each process.
<P>
The DIAGONAL portion of the local submatrix of a processor can be defined
as the submatrix which is obtained by extracting the part corresponding
to the rows r1-r2 and columns r1-r2 of the global matrix, where r1 is the
first row that belongs to the processor and r2 is the last row belonging
to this processor. This is a square m x m matrix. The remaining m x (N-n)
portion of the local rows constitutes the OFF-DIAGONAL portion.
<P>
If o_nnz and d_nnz are specified, then o_nz and d_nz are ignored.
<P>
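As an illustrative sketch of these definitions (not taken from this manual page), consider an 8x8 tridiagonal matrix distributed over exactly 2 processes, each owning 4 rows and the matching 4 columns. Only the rows adjacent to the ownership boundary have a nonzero outside the owned column range, so the preallocation arrays could be set as follows:
<PRE>
#include &lt;petscmat.h&gt;

/* Run with: mpiexec -n 2 ./thisprogram
   Rank 0 owns rows/columns 0..3, rank 1 owns rows/columns 4..7. */
int main(int argc,char **argv)
{
  const PetscInt d_nnz0[4] = {2,3,3,2}, o_nnz0[4] = {0,0,0,1};  /* rank 0 */
  const PetscInt d_nnz1[4] = {2,3,3,2}, o_nnz1[4] = {1,0,0,0};  /* rank 1 */
  Mat            A;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  /* the scalar counts d_nz and o_nz are ignored because the arrays are given */
  ierr = MatCreateMPIAIJSELL(PETSC_COMM_WORLD,4,4,8,8,
                             0,rank ? d_nnz1 : d_nnz0,
                             0,rank ? o_nnz1 : o_nnz0,&A);CHKERRQ(ierr);
  /* ... MatSetValues(), MatAssemblyBegin()/MatAssemblyEnd() ... */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
</PRE>
<P>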
When calling this routine with a single process communicator, a matrix of
type SEQAIJSELL is returned. If a matrix of type MPIAIJSELL is desired
for this type of communicator, use the construction mechanism:
<A HREF="../Mat/MatCreate.html#MatCreate">MatCreate</A>(...,&A); <A HREF="../Mat/MatSetType.html#MatSetType">MatSetType</A>(A,MPIAIJSELL); <A HREF="../Mat/MatMPIAIJSetPreallocation.html#MatMPIAIJSetPreallocation">MatMPIAIJSetPreallocation</A>(A,...);
<P>
<H3><FONT COLOR="#CC3333">Options Database Keys</FONT></H3>
<DT><B>-mat_aijsell_eager_shadow </B> - Construct the shadow matrix upon matrix assembly; the default is to take a "lazy" approach, performing this step the first time the matrix is applied
<br>
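<P>
The option can be supplied at run time in the usual way; for example (the executable name here is just a placeholder):
<PRE>
   mpiexec -n 4 ./my_app -mat_aijsell_eager_shadow
</PRE>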
<P>
<H3><FONT COLOR="#CC3333">Keywords</FONT></H3>
matrix, sparse, parallel
<BR>
<P>
<H3><FONT COLOR="#CC3333">See Also</FONT></H3>
<A HREF="../Mat/MatCreate.html#MatCreate">MatCreate</A>(), <A HREF="../Mat/MatCreateSeqAIJSELL.html#MatCreateSeqAIJSELL">MatCreateSeqAIJSELL</A>(), <A HREF="../Mat/MatSetValues.html#MatSetValues">MatSetValues</A>()
<BR><P><H3><FONT COLOR="#CC3333">Level</FONT></H3>intermediate<BR>
<H3><FONT COLOR="#CC3333">Location</FONT></H3>
<A HREF="../../../src/mat/impls/aij/mpi/aijsell/mpiaijsell.c.html#MatCreateMPIAIJSELL">src/mat/impls/aij/mpi/aijsell/mpiaijsell.c</A>
<BR><A HREF="./index.html">Index of all Mat routines</A>
<BR><A HREF="../../index.html">Table of Contents for all manual pages</A>
<BR><A HREF="../singleindex.html">Index of all manual pages</A>
</BODY></HTML>