<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD> <link rel="canonical" href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetMultiProcBlock.html" />
<META NAME="GENERATOR" CONTENT="DOCTEXT">
<TITLE>MatGetMultiProcBlock</TITLE>
</HEAD>
<BODY BGCOLOR="FFFFFF">
<div id="version" align=right><b>petsc-3.14.5 2021-03-03</b></div>
<div id="bugreport" align=right><a href="mailto:petsc-maint@mcs.anl.gov?subject=Typo or Error in Documentation &body=Please describe the typo or error in the documentation: petsc-3.14.5 v3.14.5 docs/manualpages/Mat/MatGetMultiProcBlock.html "><small>Report Typos and Errors</small></a></div>
<A NAME="MatGetMultiProcBlock"><H1>MatGetMultiProcBlock</H1></A>
Create multiple 'parallel submatrices' from a given Mat, as used for example by the multi-process block Jacobi preconditioner. Each submatrix can span multiple MPI processes.
<H3><FONT COLOR="#CC3333">Synopsis</FONT></H3>
<PRE>
#include "petscmat.h"
<A HREF="../Sys/PetscErrorCode.html#PetscErrorCode">PetscErrorCode</A> <A HREF="../Mat/MatGetMultiProcBlock.html#MatGetMultiProcBlock">MatGetMultiProcBlock</A>(<A HREF="../Mat/Mat.html#Mat">Mat</A> mat, <A HREF="../Sys/MPI_Comm.html#MPI_Comm">MPI_Comm</A> subComm, <A HREF="../Mat/MatReuse.html#MatReuse">MatReuse</A> scall,<A HREF="../Mat/Mat.html#Mat">Mat</A> *subMat)
</PRE>
Collective on <A HREF="../Mat/Mat.html#Mat">Mat</A>
<P>
<H3><FONT COLOR="#CC3333">Input Parameters</FONT></H3>
<TABLE border="0" cellpadding="0" cellspacing="0">
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>mat </B></TD><TD>- the matrix
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>subComm </B></TD><TD>- the sub-communicator obtained from MPI_Comm_split() on the communicator of mat
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>scall </B></TD><TD>- either <A HREF="../Mat/MatReuse.html#MatReuse">MAT_INITIAL_MATRIX</A> or <A HREF="../Mat/MatReuse.html#MatReuse">MAT_REUSE_MATRIX</A>
</TD></TR></TABLE>
<P>
<H3><FONT COLOR="#CC3333">Output Parameter</FONT></H3>
<TABLE border="0" cellpadding="0" cellspacing="0">
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>subMat </B></TD><TD>- the 'parallel submatrix' spanning the given subComm
</TD></TR></TABLE>
<P>
<H3><FONT COLOR="#CC3333">Notes</FONT></H3>
The submatrix partitioning across processes is dictated by 'subComm', a
communicator obtained with MPI_Comm_split() on the communicator of the
original matrix. The MPI_Comm_split() is not restricted to grouping
consecutive ranks of the original communicator.
<P>
Due to the MPI_Comm_split() usage, the parallel layout of the submatrices
maps directly to the layout of the original matrix [with respect to the
local row/column partitioning]. Hence the original 'diagonal matrix'
naturally maps onto the diagonal block of the subMat and is used there
directly. The off-diagonal block, however, loses some columns, and is
reconstructed with <A HREF="../Mat/MatSetValues.html#MatSetValues">MatSetValues</A>().
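<P>
A minimal usage sketch, not part of the PETSc distribution: the 8x8 diagonal
matrix and the rank-parity grouping below are illustrative assumptions, and
error handling is reduced to CHKERRQ().
<PRE>
#include "petscmat.h"

int main(int argc, char **argv)
{
  Mat            A, subA;
  MPI_Comm       subComm;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

  /* Assemble a small parallel AIJ matrix with 2.0 on the diagonal */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 8, 8, 1, NULL, 1, NULL, &A);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatShift(A, 2.0);CHKERRQ(ierr);

  /* Split PETSC_COMM_WORLD into two subcommunicators by rank parity;
     the grouping need not consist of consecutive ranks */
  ierr = MPI_Comm_split(PETSC_COMM_WORLD, rank % 2, rank, &subComm);CHKERRQ(ierr);

  /* Create the 'parallel submatrix' spanning subComm */
  ierr = MatGetMultiProcBlock(A, subComm, MAT_INITIAL_MATRIX, &subA);CHKERRQ(ierr);

  /* After changing numerical values of A (same nonzero pattern),
     refresh the existing submatrix instead of recreating it */
  ierr = MatShift(A, 1.0);CHKERRQ(ierr);
  ierr = MatGetMultiProcBlock(A, subComm, MAT_REUSE_MATRIX, &subA);CHKERRQ(ierr);

  ierr = MatDestroy(&subA);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = MPI_Comm_free(&subComm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
</PRE>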
<P>
<H3><FONT COLOR="#CC3333">See Also</FONT></H3>
<A HREF="../Mat/MatCreateSubMatrices.html#MatCreateSubMatrices">MatCreateSubMatrices</A>()
<P><H3><FONT COLOR="#CC3333">Level</FONT></H3>advanced<BR>
<H3><FONT COLOR="#CC3333">Location</FONT></H3>
<A HREF="../../../src/mat/interface/matrix.c.html#MatGetMultiProcBlock">src/mat/interface/matrix.c</A>
<P><H3><FONT COLOR="CC3333">Implementations</FONT></H3><A HREF="../../../src/mat/impls/aij/mpi/mpb_aij.c.html#MatGetMultiProcBlock_MPIAIJ">MatGetMultiProcBlock_MPIAIJ in src/mat/impls/aij/mpi/mpb_aij.c</A><BR>
<A HREF="../../../src/mat/impls/aij/seq/aij.c.html#MatGetMultiProcBlock_SeqAIJ">MatGetMultiProcBlock_SeqAIJ in src/mat/impls/aij/seq/aij.c</A><BR>
<A HREF="../../../src/mat/impls/baij/mpi/mpb_baij.c.html#MatGetMultiProcBlock_MPIBAIJ">MatGetMultiProcBlock_MPIBAIJ in src/mat/impls/baij/mpi/mpb_baij.c</A><BR>
<BR><A HREF="./index.html">Index of all Mat routines</A>
<BR><A HREF="../../index.html">Table of Contents for all manual pages</A>
<BR><A HREF="../singleindex.html">Index of all manual pages</A>
</BODY></HTML>