File: MatCreateAIJCUSPARSE.html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML3.2 EN">
<HTML>
<HEAD> <link rel="canonical" href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateAIJCUSPARSE.html" />
<META NAME="GENERATOR" CONTENT="DOCTEXT">
<TITLE>MatCreateAIJCUSPARSE</TITLE>
</HEAD>
<BODY BGCOLOR="FFFFFF">
   <div id="version" align=right><b>petsc-3.7.5 2017-01-01</b></div>
   <div id="bugreport" align=right><a href="mailto:petsc-maint@mcs.anl.gov?subject=Typo or Error in Documentation &body=Please describe the typo or error in the documentation: petsc-3.7.5 v3.7.5 docs/manualpages/Mat/MatCreateAIJCUSPARSE.html "><small>Report Typos and Errors</small></a></div>
<A NAME="MatCreateAIJCUSPARSE"><H1>MatCreateAIJCUSPARSE</H1></A>
Creates a sparse matrix in AIJ (compressed row) format (the default parallel PETSc format).  This matrix will ultimately be pushed down to NVIDIA GPUs and use the CUSPARSE library for calculations. For good matrix assembly performance the user should preallocate the matrix storage by setting the parameters d_nz and o_nz (or the arrays d_nnz and o_nnz).  By setting these parameters accurately, performance during matrix assembly can be increased by more than a factor of 50.
<H3><FONT COLOR="#CC3333">Synopsis</FONT></H3>
<PRE>
#include "petscmat.h" 
#undef __FUNCT__
#define __FUNCT__ "MatCreateAIJCUSPARSE"
PetscErrorCode  MatCreateAIJCUSPARSE(MPI_Comm comm,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt d_nz,const PetscInt d_nnz[],PetscInt o_nz,const PetscInt o_nnz[],Mat *A)
</PRE>
Collective on <A HREF="../Sys/MPI_Comm.html#MPI_Comm">MPI_Comm</A>
<P>
<H3><FONT COLOR="#CC3333">Input Parameters</FONT></H3>
<TABLE border="0" cellpadding="0" cellspacing="0">
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>comm </B></TD><TD>- MPI communicator
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>m </B></TD><TD>- number of local rows (or <A HREF="../Sys/PETSC_DECIDE.html#PETSC_DECIDE">PETSC_DECIDE</A>)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>n </B></TD><TD>- number of local columns (or <A HREF="../Sys/PETSC_DECIDE.html#PETSC_DECIDE">PETSC_DECIDE</A>)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>M </B></TD><TD>- number of global rows (or <A HREF="../Sys/PETSC_DETERMINE.html#PETSC_DETERMINE">PETSC_DETERMINE</A>)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>N </B></TD><TD>- number of global columns (or <A HREF="../Sys/PETSC_DETERMINE.html#PETSC_DETERMINE">PETSC_DETERMINE</A>)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>d_nz </B></TD><TD>- number of nonzeros per row in the diagonal portion of the local submatrix
(same for all local rows)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>d_nnz </B></TD><TD>- array containing the number of nonzeros in the various rows of the diagonal portion of the local submatrix
(possibly different for each row) or NULL
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>o_nz </B></TD><TD>- number of nonzeros per row in the off-diagonal portion of the local submatrix
(same for all local rows)
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>o_nnz </B></TD><TD>- array containing the number of nonzeros in the various rows of the off-diagonal portion of the local submatrix
(possibly different for each row) or NULL
</TD></TR></TABLE>
<P>
<H3><FONT COLOR="#CC3333">Output Parameter</FONT></H3>
<DT><B>A </B>- the matrix
<br>
<P>
It is recommended that one use the <A HREF="../Mat/MatCreate.html#MatCreate">MatCreate</A>(), <A HREF="../Mat/MatSetType.html#MatSetType">MatSetType</A>() and/or <A HREF="../Mat/MatSetFromOptions.html#MatSetFromOptions">MatSetFromOptions</A>() paradigm, followed by
MatXXXXSetPreallocation(), instead of calling this routine directly.
[MatXXXXSetPreallocation() is, for example, <A HREF="../Mat/MatSeqAIJSetPreallocation.html#MatSeqAIJSetPreallocation">MatSeqAIJSetPreallocation</A>().]
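<P>
For example, a minimal sketch of that paradigm could look as follows. This is an illustration, not part of this routine's interface: the communicator, the global size N = 100, and the per-row counts 5 and 2 are assumptions made for the example, and the fragment presumes petscmat.h has been included and <A HREF="../Sys/PetscInitialize.html#PetscInitialize">PetscInitialize</A>() has been called.
<PRE>
  Mat            A;
  PetscInt       N = 100;                  /* hypothetical global problem size */
  PetscErrorCode ierr;

  ierr = MatCreate(PETSC_COMM_WORLD,&amp;A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr);
  ierr = MatSetType(A,MATAIJCUSPARSE);CHKERRQ(ierr);       /* or -mat_type aijcusparse via MatSetFromOptions() */
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A,5,NULL);CHKERRQ(ierr);          /* used in the uniprocess case */
  ierr = MatMPIAIJSetPreallocation(A,5,NULL,2,NULL);CHKERRQ(ierr);   /* used in the parallel case   */
  /* ... set entries with MatSetValues(), then MatAssemblyBegin()/MatAssemblyEnd() ... */
</PRE>
Calling both preallocation routines is the usual idiom; each call is ignored by matrix types it does not apply to.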
<P>
<H3><FONT COLOR="#CC3333">Notes</FONT></H3>
If the d_nnz (o_nnz) array is given then d_nz (o_nz) is ignored
<P>
The AIJ format (also called the Yale sparse matrix format or
compressed row storage), is fully compatible with standard Fortran 77
storage.  That is, the stored row and column indices can begin at
either one (as in Fortran) or zero.  See the users' manual for details.
<P>
Specify the preallocated storage with either d_nz/o_nz or d_nnz/o_nnz (not both).
Set d_nz=<A HREF="../Sys/PETSC_DEFAULT.html#PETSC_DEFAULT">PETSC_DEFAULT</A>, o_nz=<A HREF="../Sys/PETSC_DEFAULT.html#PETSC_DEFAULT">PETSC_DEFAULT</A> and d_nnz=NULL, o_nnz=NULL for PETSc to control dynamic memory
allocation.  For large problems you MUST preallocate memory or you
will get TERRIBLE performance; see the users' manual chapter on matrices.
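<P>
As an illustration, a direct call with constant per-row preallocation might look like the following sketch (the communicator, the global size N, and the counts 5 and 2 are assumptions made for the example, not requirements of the routine).
<PRE>
  Mat            A;
  PetscInt       N = 100;    /* hypothetical global size */
  PetscErrorCode ierr;

  /* assume roughly 5 nonzeros per row in the diagonal block and 2 in the off-diagonal block */
  ierr = MatCreateAIJCUSPARSE(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,N,N,
                              5,NULL,2,NULL,&amp;A);CHKERRQ(ierr);
  /* ... insert values with MatSetValues() ... */
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
</PRE>
To preallocate row by row instead, pass d_nnz and o_nnz arrays of length m; the scalar d_nz and o_nz values are then ignored.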
<P>
By default, this format uses inodes (identical nodes) when possible, to
improve numerical efficiency of matrix-vector products and solves. We
search for consecutive rows with the same nonzero structure, thereby
reusing matrix information to achieve increased efficiency.
<P>

<P>
<H3><FONT COLOR="#CC3333">See Also</FONT></H3>
 <A HREF="../Mat/MatCreate.html#MatCreate">MatCreate</A>(), <A HREF="../Mat/MatCreateAIJ.html#MatCreateAIJ">MatCreateAIJ</A>(), <A HREF="../Mat/MatSetValues.html#MatSetValues">MatSetValues</A>(), <A HREF="../Mat/MatSeqAIJSetColumnIndices.html#MatSeqAIJSetColumnIndices">MatSeqAIJSetColumnIndices</A>(), <A HREF="../Mat/MatCreateSeqAIJWithArrays.html#MatCreateSeqAIJWithArrays">MatCreateSeqAIJWithArrays</A>(), <A HREF="../Mat/MatCreateAIJ.html#MatCreateAIJ">MatCreateAIJ</A>(), MATMPIAIJCUSPARSE, <A HREF="../Mat/MATAIJCUSPARSE.html#MATAIJCUSPARSE">MATAIJCUSPARSE</A>
<P><B><FONT COLOR="#CC3333">Level:</FONT></B> intermediate
<BR><B><FONT COLOR="#CC3333">Location:</FONT></B> <A HREF="../../../src/mat/impls/aij/mpi/mpicusparse/mpiaijcusparse.cu#MatCreateAIJCUSPARSE">src/mat/impls/aij/mpi/mpicusparse/mpiaijcusparse.cu</A>
<BR><A HREF="./index.html">Index of all Mat routines</A>
<BR><A HREF="../../index.html">Table of Contents for all manual pages</A>
<BR><A HREF="../singleindex.html">Index of all manual pages</A>
</BODY></HTML>