File: faq.html

<html>
<head>
<title>ScaLAPACK FAQ</title>
</head>
<body>
<center>
<h1> ScaLAPACK Frequently Asked Questions (FAQ) </h1>
<h2>
<i>
<a href="http://www.netlib.org/scalapack/html/scalapack_contributors.html">
scalapack@cs.utk.edu</a>
</i>
</h2>
</center>
<p>
<IMG SRC="http://www.netlib.org/scalapack/html/gif/blue.gif"></p>
<p>
<i>
Many thanks to the <a href="http://www.netlib.org/utk/icl/maintainers.html">
netlib_maintainers@netlib.org</a>, after whose FAQ list this
ScaLAPACK FAQ is patterned.
</i>
</p>

<p>
<IMG SRC="http://www.netlib.org/scalapack/html/gif/blue.gif"></p>
<h2>
Table of Contents
</h2>

<dl>
<dd>ScaLAPACK
   <dl>
    <a href="#1.1"> <dd>1.1)  What is ScaLAPACK?</a>
    <a href="#1.2"> <dd>1.2)  How do I reference ScaLAPACK in a scientific publication?</a>
    <a href="#1.3"> <dd>1.3)  Are there vendor-specific versions of ScaLAPACK?</a>
    <a href="#1.4"> <dd>1.4)  What is the difference between the vendor and
                              Netlib version of ScaLAPACK and which should
                              I use?</a>
    <a href="#1.5"> <dd>1.5)  Are there legal restrictions on the use of ScaLAPACK software?</a>
    <a href="#1.6"> <dd>1.6)  What is two-dimensional block cyclic data distribution?</a>
    <a href="#1.7"> <dd>1.7)  Where can I find out more information about ScaLAPACK?</a>
    <a href="#1.8"> <dd>1.8)  What and where are the PBLAS?</a>
    <a href="#1.9"> <dd>1.9)  Are example programs available?</a>
    <a href="#1.10"> <dd>1.10)  How do I run an example program?</a>
    <a href="#1.11"> <dd>1.11)  How do I install ScaLAPACK?</a>
    <a href="#1.12"> <dd>1.12)  How do I install ScaLAPACK using MPIch-G and Globus?</a>
    <a href="#1.13"> <dd>1.13) How do I achieve high performance using ScaLAPACK?</a>
    <a href="#1.14"> <dd>1.14)  Are prebuilt ScaLAPACK libraries available?</a>
    <a href="#1.15"> <dd>1.15)  How do I find a particular routine?</a>
    <a href="#1.16"> <dd>1.16)  I can't get a program to work.  What should I do?</a>
    <a href="#1.17"> <dd>1.17)  How can I unpack scalapack.tgz?</a>
    <a href="#1.18"> <dd>1.18) What technical support for ScaLAPACK is available?</a>
    <a href="#1.19"> <dd>1.19) How do I submit a bug report?</a>
    <a href="#1.20"> <dd>1.20) How do I gather a distributed vector back to one processor?</a>
   </dl>
<p>
<dd>BLACS
   <dl>
    <a href="#2.1"> <dd>2.1)  What and where are the BLACS?</a>
    <a href="#2.2"> <dd>2.2)  Is there a Quick Reference Guide to the BLACS available?</a>
    <a href="#2.3"> <dd>2.3)  How do I install the BLACS?</a>
    <a href="#2.4"> <dd>2.4)  Are prebuilt BLACS libraries available?</a>
    <a href="#2.5"> <dd>2.5)  Are example BLACS programs available?</a>
   </dl>
<p>
<dd>BLAS
   <dl>
    <a href="#3.1"> <dd>3.1)  What are the BLAS?</a>
    <a href="#3.2"> <dd>3.2)  Publications/references for the BLAS?</a>
    <a href="#3.3"> <dd>3.3)  Is there a Quick Reference Guide to the BLAS avail
able?</a>
    <a href="#3.4"> <dd>3.4)  Are optimized BLAS libraries available?</a>
    <a href="#3.5"> <dd>3.5)  What is ATLAS?</a>
    <a href="#3.6"> <dd>3.6)  Where can I find vendor supplied BLAS?</a>
    <a href="#3.7"> <dd>3.7)  Where can I find the Intel BLAS for Linux?</a>
    <a href="#3.8"> <dd>3.8)  Where can I find Java BLAS?</a>
    <a href="#3.9"> <dd>3.9)  Are prebuilt Fortran77 ref implementation BLAS lib
raries available from Netlib?</a>
   </dl>
</dl>

<p>
<IMG SRC="http://www.netlib.org/scalapack/html/gif/blue.gif"></p>
<h2>
1)  ScaLAPACK 
</h2>

<p>
<strong>
<a name="1.1">
    1.1) What is ScaLAPACK?  <br>
</a>
</strong>

<p>
The <B>ScaLAPACK</B> (or Scalable LAPACK) library includes a subset of
<a href="http://www.netlib.org/lapack/"><B>LAPACK</B></a> routines
redesigned for distributed memory MIMD parallel computers.  It is
currently written in a Single-Program-Multiple-Data style using explicit
message passing for interprocessor communication.  It assumes matrices
are laid out in a <a href="http://www.netlib.org/scalapack/html/fig/2dbsd.ps">two-dimensional block cyclic
decomposition</a>.
<p>
Like <B>LAPACK</B>, the <B>ScaLAPACK</B> routines are based on
block-partitioned algorithms
in order to minimize the frequency of data movement between different
levels of the memory hierarchy.
(For such machines, the memory hierarchy includes the
off-processor memory of other processors, in addition to the hierarchy of
registers, cache, and local memory on each processor.)
The fundamental building blocks
of the <B>ScaLAPACK</B> library are distributed memory versions
(<a href="http://www.netlib.org/scalapack/html/pblas_qref.html">PBLAS</a>)
of the <a href="http://www.netlib.org/blas/">Level 1, 2 and 3 <B>BLAS</B></a>, and a set of Basic Linear Algebra Communication Subprograms
(<a href="http://www.netlib.org/blacs/">BLACS</a>) for
communication tasks that arise frequently in parallel linear algebra
computations.
In the <B>ScaLAPACK</B> routines, all interprocessor communication occurs
within the <B>PBLAS</B> and the <B>BLACS</B>.
One of the design goals of <B>ScaLAPACK</B> was to have the ScaLAPACK
routines resemble their <B>LAPACK</B> equivalents as much as possible.</p>
<p>
For detailed information on ScaLAPACK, please refer to the <a href="http://www.netlib.org/scalapack/slug/scalapack_slug.html">ScaLAPACK Users' Guide</a>.</p>
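<p>
For illustration, here is a minimal, hedged sketch of how a ScaLAPACK-based
program typically starts: it initializes a BLACS process grid before any
distributed computation takes place.  It assumes the C interface to an
MPI-based BLACS; the grid dimensions and the printing are illustrative only.</p>
<pre>
/* Minimal BLACS grid setup sketch (assumes the MPI BLACS C interface). */
#include <stdio.h>

/* Prototypes normally supplied by the BLACS installation. */
void Cblacs_pinfo(int *mypnum, int *nprocs);
void Cblacs_get(int icontxt, int what, int *val);
void Cblacs_gridinit(int *icontxt, char *order, int nprow, int npcol);
void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol, int *myprow, int *mypcol);
void Cblacs_gridexit(int icontxt);
void Cblacs_exit(int doneflag);

int main(void)
{
    int iam, nprocs, ictxt, nprow = 2, npcol = 2, myrow, mycol;

    Cblacs_pinfo(&iam, &nprocs);                  /* my rank and process count  */
    Cblacs_get(-1, 0, &ictxt);                    /* default system context     */
    Cblacs_gridinit(&ictxt, "Row", nprow, npcol); /* 2x2 row-major process grid */
    Cblacs_gridinfo(ictxt, &nprow, &npcol, &myrow, &mycol);

    if (myrow >= 0)                               /* -1 means not in the grid   */
        printf("process %d holds grid position (%d,%d)\n", iam, myrow, mycol);

    /* ... build array descriptors and call ScaLAPACK drivers here ... */

    if (myrow >= 0) Cblacs_gridexit(ictxt);
    Cblacs_exit(0);                               /* 0: also shut down the
                                                     underlying message passing */
    return 0;
}
</pre>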

<hr>
<p>
<strong>
    <a name="1.2"> 
    1.2) How do I reference ScaLAPACK in a scientific publication?<br>
</a>
</strong>
<p>
We ask that you cite the ScaLAPACK Users' Guide.
</p>
<pre>
@BOOK{slug,
      AUTHOR = {Blackford, L. S. and Choi, J. and Cleary, A. and
                D'Azevedo, E. and Demmel, J. and Dhillon, I. and
                Dongarra, J. and Hammarling, S. and Henry, G. and
                Petitet, A. and Stanley, K. and Walker, D. and
                Whaley, R. C.},
      TITLE = {{ScaLAPACK} Users' Guide},
      PUBLISHER = {Society for Industrial and Applied Mathematics},
      YEAR = {1997},
      ADDRESS = {Philadelphia, PA},
      ISBN = {0-89871-397-8 (paperback)} }
</pre>

<hr>
<p>
<strong>
<a name="1.3">
    1.3) Are there vendor-specific versions of ScaLAPACK?<br>
</a>
</strong>

<p>
Yes.</p>
<p> 
ScaLAPACK has been incorporated into several commercial packages,
including the
<a href="http://www.sun.com/solutions/hpc/docs/30/">Sun Scalable Scientific Subroutine Library (Sun S3L)</a>,
<a href="http://www.nag.co.uk:80/numeric/FM.html">NAG Parallel Library</a>,
<a href="http://www.rs6000.ibm.com/software/sp_products/esslpara.html">IBM Parallel ESSL</a>,
and <a href="http://www.cray.com/PUBLIC/product-info/sw/PE/LibSci.html">Cray LIBSCI</a>,
and is being integrated into the <a href="http://www.vni.com/products/imsl/">VNI IMSL Numerical Library</a>, as well as software libraries for
Fujitsu, Hewlett-Packard/Convex, Hitachi, NEC, and <a href="http://www.sgi.com/Products/hardware/Power/ch\_complib.html">SGI</a>.</p>
<hr>
<p>
<strong>
<a name="1.4">
    1.4) What is the difference between the vendor and
         Netlib version of ScaLAPACK and which should I use?<br>
</a>
</strong>

<p>
The publicly available version of ScaLAPACK (on netlib) is designed
to be portable and efficient across a wide range of computers.
It is not hand-tuned for a specific computer architecture.</p>
<p>
The vendor-specific versions of ScaLAPACK have been optimized for a
specific architecture.  Therefore, for best performance, we recommend
using a vendor-optimized version of ScaLAPACK if it is available.</p>
<p>
However, as new ScaLAPACK routines are introduced with each release,
the vendor-specific versions of ScaLAPACK may only contain
a subset of the existing routines.</p>

<p>
If you suspect an error in a vendor-specific ScaLAPACK routine,
we recommend downloading the ScaLAPACK Test Suite from netlib and running it against that library.
</p>
<hr>
<p>
<strong>
<a name="1.5">
    1.5)  Are there legal restrictions on the use of ScaLAPACK software?<br>
</a>
</strong>

<p>
ScaLAPACK (like LINPACK, EISPACK, LAPACK, etc) is a freely-available
software package.  It is available from netlib via anonymous ftp and
the World Wide Web.  It can be, and has been, included in commercial
packages (e.g., Sun's S3L, IBM's Parallel ESSL, NAG Numerical PVM and 
MPI Library).
We only ask that proper credit be given to the authors.</p>
<p> 
Like all software, it is copyrighted.  It is not trademarked, but we do
ask the following:</p>
<p> 
If you modify the source for these routines
we ask that you change the name of the routine and comment
the changes made to the original.</p>
<p> 
We will gladly answer any questions regarding the software.
If a modification is done, however, it is the responsibility of the
person who modified the routine to provide support.</p>

<hr>
<p>
<strong>
<a name="1.6">
    1.6)  What is two-dimensional block cyclic data distribution? <br>
</a>
</strong>

<a href="http://www.netlib.org/scalapack/html/fig/2dbsd.ps">two-dimensional block cyclic decomposition</a>

<hr>
<p>
<strong>
<a name="1.7">
    1.7)  Where can I find more information about ScaLAPACK?<br>
</a>
</strong>

<p>
A variety of working notes related to the ScaLAPACK library
were published as LAPACK Working Notes and are available in
postscript and pdf format at:
<dl>
<dd><a href="http://www.netlib.org/lapack/lawns/">
http://www.netlib.org/lapack/lawns/</a> and
<dd><a href="http://www.netlib.org/lapack/lawnspdf/">
http://www.netlib.org/lapack/lawnspdf/</a>
</dl>

<hr>
<p>
<strong>
<a name="1.8">
    1.8) What and where are the PBLAS?<br>
</a>
</strong>

<p>
The <B>Parallel Basic Linear Algebra Subprograms (PBLAS)</B> are distributed
memory versions of the <a href="http://www.netlib.org/blas/">Level 1, 2 and 3 <B>BLAS</B></a>. A
<a href="http://www.netlib.org/scalapack/html/pblas_qref.html">Quick
Reference Guide to the PBLAS</a> is available.
The software is available as
part of the ScaLAPACK distribution tar file (scalapack.tgz).
<p>
There is also a new prototype version of the <a href="http://www.netlib.org/scalapack/prototype/pblasV2.tgz">PBLAS (version 2.0)</a>,
which is alignment-restriction free and uses logical algorithmic blocking
techniques.  For details, please refer to the <a href="http://www.netlib.org/scalapack/prototype/readme.pblas">scalapack/prototype/readme.pblas</a>.</p>  

<hr>
<p>
<strong>
<a name="1.9">
    1.9)  Are example ScaLAPACK programs available?<br>
</a>
</strong>

<p>
Yes, example ScaLAPACK programs are available.  Refer to
<dl>
<dd><a href="http://www.netlib.org/scalapack/examples/">
<address>http://www.netlib.org/scalapack/examples/</address></a>
</dl>
for a list of available example programs.

<p>
A detailed description of how to run a ScaLAPACK example program
is discussed in Chapter 2 of the <a href="http://www.netlib.org/scalapack/slug/scalapack_slug.html">ScaLAPACK Users' Guide</a>.</p>

<hr>
<p>
<strong>
<a name="1.10">
    1.10)  How do I run an example program?<br>
</a>
</strong>

<p>
A detailed description of how to run a ScaLAPACK example program
is discussed in Chapter 2 of the <a href="http://www.netlib.org/scalapack/slug/scalapack_slug.html">ScaLAPACK Users' Guide</a>.</p>

<hr>
<p>
<strong>
<a name="1.11">
    1.11)  How do I install ScaLAPACK?<br>
</a>
</strong>

<p>
A comprehensive <a href="http://www.netlib.org/scalapack/scalapack_install.ps">Installation Guide</a> for ScaLAPACK is provided.  In short, you only
need to modify one file, <b>SLmake.inc</b>, to specify the compiler, compiler
flags, and the locations of the MPI, BLACS, and BLAS libraries.
Then type <b>make lib</b> to build the ScaLAPACK library, and
<b>make exe</b> to build the testing/timing executables.  Example <b>SLmake.inc</b>
files for various architectures are supplied in the <b>SCALAPACK/INSTALL</b>
subdirectory of the distribution.</p>

<p>
The ScaLAPACK installation assumes that a low-level message-passing layer
(such as MPI, PVM, or a native message-passing library), a BLACS library
(MPIBLACS, PVMBLACS, etc.), and a BLAS library are already available.
If any of these required components is missing, it must be built before
proceeding with the ScaLAPACK installation.</p>
<p>
     <ul>
     <li> <a href="http://www.netlib.org/mpi/">MPI</a>
     <li> <a href="http://www.netlib.org/pvm3/">PVM</a>
     <li> <a href="http://www.netlib.org/blacs/">BLACS</a>
     <li> <a href="http://www.netlib.org/atlas/">ATLAS</a> or
          <a href="http://www.netlib.org/blas/">BLAS</a>.
     </ul>

If a vendor-optimized BLAS library is not available, ATLAS can be
used to automatically generate an optimized BLAS library for your
architecture.  Only as a last resort should the user use the reference 
implementation Fortran77 BLAS contained on the <a href="http://www.netlib.org/blas/">BLAS webpage</a>.
</p>
<hr>
<p>
<strong>
<a name="1.12">
    1.12)  How do I install ScaLAPACK using MPICH-G and Globus?<br>
</a>
</strong>

<p>
A detailed explanation of how to run a ScaLAPACK program using MPICH-G
and Globus can be found at: <a href="http://www.cs.utk.edu/~petitet/grads/">http://www.cs.utk.edu/~petitet/grads/</a>.
</p>

<p>
See <a href="#1.11">Question 1.11</a> for general installation instructions.
</p>

<hr>

<p>
<strong>
<a name="1.13">
    1.13)  How do I achieve high performance using ScaLAPACK?<br>
</a>
</strong>

<p>
ScaLAPACK performance relies on an efficient low-level message-passing
layer and high speed interconnection network for communication,
and an optimized BLAS library for local computation.
</p>
<p>
For a detailed description of performance-related issues, please refer to
Chapter 5 of the <a href="http://www.netlib.org/scalapack/slug/scalapack_slug.html">ScaLAPACK Users' Guide</a>.
</p>

<hr>
<p>
<strong>
<a name="1.14">
    1.14)  Are prebuilt ScaLAPACK libraries available?<br>
</a>
</strong>

<p>
Yes, prebuilt ScaLAPACK libraries are available for a variety
of architectures.  Refer to
<dl>
<dd><a href="http://www.netlib.org/scalapack/archives/">
<address>http://www.netlib.org/scalapack/archives/</address></a>
</dl>
for a complete list of available prebuilt libraries.
</p>

<hr>
<p>
<strong>
<a name="1.15">
    1.15)  How do I find a particular routine?<br>
</a>
</strong>

<p>
Indexes of individual ScaLAPACK driver and computational routines
are available.  These indexes contain brief descriptions of each
routine.

<p>
<B>ScaLAPACK</B> routines are available in four types:  <B>single precision
real</B>, <B>double precision real</B>, <B>single precision complex</B>, and
<B>double precision complex</B>.  At the present time, the nonsymmetric
eigenproblem is only available in single and double precision real.</P>
<UL>
<LI><a href="http://www.netlib.org/scalapack/html/scalapack_single.html">Index of ScaLAPACK Single Precision REAL Routines</a>.
<LI><a href="http://www.netlib.org/scalapack/html/scalapack_double.html">Index of ScaLAPACK Double Precision REAL Routines</a>.
<LI><a href="http://www.netlib.org/scalapack/html/scalapack_complex.html">Index of ScaLAPACK Single Precision COMPLEX Routines</a>.
<LI><a href="http://www.netlib.org/scalapack/html/scalapack_complex16.html">Index of ScaLAPACK Double Precision COMPLEX Routines</a>.
</UL>

<hr>

<p>
<strong>
<a name="1.16">
    1.16) I can't get a program to work.  What should I do?<br>
</a>
</strong>

<p>
Technical questions should be directed to the authors at
<a href="mailto:scalapack@cs.utk.edu"><address>scalapack@cs.utk.edu.</address></a></p>
<p>
Please tell us the type of machine on
which the tests were run, the compiler and compiler options that
were used, details of the BLACS library that was used, as well
as the BLAS library, and a copy of the input file if appropriate.</p>
<p>
Be prepared to answer the following questions:</p>
<ol>
<li> Have you run the BLAS, BLACS, PBLAS and ScaLAPACK test suites?
<li> Have you checked the appropriate errata lists on netlib?
     <ul>
     <li> <a href="http://www.netlib.org/scalapack/errata.scalapack">errata.scalapack</a>
     <li> <a href="http://www.netlib.org/blacs/errata.blacs">errata.blacs</a>
     </ul>
<li> Have you attempted to replicate this error using the appropriate
     ScaLAPACK test code and/or one of the ScaLAPACK example routines?
<li> If you are using an optimized BLAS or BLACS library, have you tried
     using the reference implementations from netlib?
</ol>
</p>

<hr>
<p>
<strong>
<a name="1.17">
    1.17) How can I unpack scalapack.tgz?<br>
</a>
</strong>

<p>
<pre>
   gunzip scalapack.tgz
   tar xvf scalapack.tar
</pre>

<p>
The compression program <i>gzip (and gunzip)</i> is GNU software.  If
it is not already available on your machine, you can download it
via <i>anonymous ftp</i>:</p>
<pre>
   ncftp prep.ai.mit.edu
   cd pub/gnu/
   get gzip-1.2.4.tar
</pre>

<p>
See <a href="#1.11">Question 1.11</a> for installation instructions.
</p>

<hr>
<p>
<strong>
<a name="1.18">
    1.18) What technical support for ScaLAPACK is available?<br>
</a>
</strong>

<p>
Technical questions and comments should be directed to the authors at
<a href="mailto:scalapack@cs.utk.edu"><address>scalapack@cs.utk.edu.</address></a>
<p>
See <a href="#1.16">Question 1.16</a>

<hr>
<p>
<strong>
<a name="1.19">
    1.19) How do I submit a bug report?<br>
</a>
</strong>

<p>
Technical questions should be directed to the authors at
<a href="mailto:scalapack@cs.utk.edu"><address>scalapack@cs.utk.edu.</address></a></p>

<p>
Be prepared to answer the questions as outlined in <a href="#1.16">Question 1.16</a>.  Those are the first questions that we will ask!
<p>

<hr>
<p>
<strong>
<a name="1.20">
    1.20) How do I gather a distributed vector back to one processor?<br>
</a>
</strong>

<p>
There are several ways to accomplish this task.
</p>
<ol>
<li>You can create a local array of the global size, have each process
    write its pieces of the matrix into the appropriate locations, and
    then call the BLACS routine DGSUM2D to sum the copies element-wise,
    leaving the result on one process or on all processes
    (see the sketch after this list).
<li>You can modify SCALAPACK/TOOLS/pdlaprnt.f to write to an array
    instead of writing to a file.

<li>You can modify the routine pdlawrite.f from the example program
    <a href="http://www.netlib.org/scalapack/examples/scaex.tgz">http://www.netlib.org/scalapack/examples/scaex.tgz</a>.

<li>You can create a second "context" containing only one process,
    and then call the redistribution routines in SCALAPACK/REDIST/SRC/
    to redistribute the matrix to that process grid.
</ol>
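<p>
Below is a hedged sketch of approach 1 using the C interface to the BLACS
(Cdgsum2d).  It assumes each process already knows the global index of each
of its local entries (for instance via the block cyclic mapping); the
function name and the arguments other than the BLACS call itself are
illustrative.</p>
<pre>
/* Hedged sketch: every process drops its local entries into a zeroed copy of
   the global vector, then an element-wise BLACS sum leaves the full vector
   on process (prow, pcol).  Assumes the BLACS C interface. */
#include <string.h>

void Cdgsum2d(int ctxt, char *scope, char *top,
              int m, int n, double *A, int lda, int rdest, int cdest);

void gather_vector(int ctxt, int n, double *xglobal,
                   const double *xloc, const int *iglobal, int nloc,
                   int prow, int pcol)
{
    memset(xglobal, 0, n * sizeof(double));   /* start from zero everywhere   */
    for (int k = 0; k < nloc; k++)
        xglobal[iglobal[k]] = xloc[k];        /* place local entries globally */

    /* element-wise sum over all processes; the result lands on (prow, pcol);
       pass rdest = cdest = -1 to leave the result on every process instead   */
    Cdgsum2d(ctxt, "All", " ", n, 1, xglobal, n, prow, pcol);
}
</pre>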

<IMG SRC="http://www.netlib.org/scalapack/html/gif/blue.gif"></p>
<h2>
2)  BLACS 
</h2>

<p>
<strong>
<a name="2.1">
    2.1) What and where are the BLACS?<br>
</a>
</strong>

<p>
The BLACS (Basic Linear Algebra Communication Subprograms) project is an
ongoing investigation whose purpose is to create a linear algebra
oriented message passing interface that may be implemented efficiently and
uniformly across a large range of distributed memory platforms.</p>
<p> 
The length of time required to implement efficient distributed memory
algorithms makes it impractical to rewrite programs for every new
parallel machine.  The BLACS exist in order to make linear algebra
applications both easier to program and more portable.  It is for
this reason that the BLACS are used as the communication layer of
<B>ScaLAPACK</B>.</p>
<p>
For further information on the BLACS, please refer to the blacs
directory on netlib, as well as the
<a href="http://www.netlib.org/blacs/">BLACS Homepage</a>.

<hr>
<p>
<strong>
<a name="2.2">
    2.2)  Is there a Quick Reference Guide to the BLACS available?<br>
</a>
</strong>
 
<p>
Yes, there is a postscript version of the <B>Quick Reference Guide to 
the BLACS</B> available.</p>
<ul>
<li><a href="http://www.netlib.org/blacs/f77blacsqref.ps">Fortran77 interface to the BLACS</a>.
<li><a href="http://www.netlib.org/blacs/cblacsqref.ps">C interface to the BLACS</a>.
</ul>

<hr>
<p>
<strong>
<a name="2.3">
    2.3)  How do I install the BLACS?<br>
</a>
</strong>
 
<p>
First, you must choose which underlying message-passing layer
the BLACS will use (MPI, PVM, NX, MPL, etc.).  Once this decision
has been made, download the corresponding gzipped tar file.
<ul>
<li><a href="http://www.netlib.org/blacs/mpiblacs.tgz">MPI BLACS</a>
<li><a href="http://www.netlib.org/blacs/pvmblacs.tgz">PVM BLACS</a>
</ul>

<p>
An <a href="http://www.netlib.org/blacs/blacs_install.ps">Installation Guide</a> for the BLACS is provided, as well as a
comprehensive <a href="http://www.netlib.org/blacs/blacstester.tgz">BLACS Test Suite</a>.  In short, you only
need to modify one file, <b>Bmake.inc</b>, to specify the compiler, compiler
flags, and the location of the MPI library.
Then type <b>make mpi</b>, for example, to build the MPI BLACS library.
Example <b>Bmake.inc</b> files for various architectures are
supplied in the <b>BLACS/BMAKES</b>
subdirectory of the distribution.  There are also scripts in <b>BLACS/INSTALL</b> that can be run to help determine some of the settings in
the <b>Bmake.inc</b> file.</p>
<p>
It is highly recommended that you run the <a href="http://www.netlib.org/blacs/blacstester.tgz">BLACS Tester</a> to ensure that the installation is
correct and that the low-level message-passing layer behaves as expected.
If you suspect an error, please consult the
<ul>
<li><a href="http://www.netlib.org/blacs/errata.blacs">errata.blacs</a>
</ul>
file on netlib.
</p>

<hr>
<p>
<strong>
<a name="2.4">
    2.4)  Are prebuilt BLACS libraries available?<br>
</a>
</strong>

<p>
Yes, prebuilt BLACS libraries are available for a variety
of architectures and message-passing interfaces.  Refer to
<dl>
<dd><a href="http://www.netlib.org/blacs/archives/">
<address>http://www.netlib.org/blacs/archives/</address></a>
</dl>
for a complete list of available prebuilt libraries.

<hr>
<p>
<strong>
<a name="2.5">
    2.5)  Are example BLACS programs available?<br>
</a>
</strong>

<p>
Yes, example BLACS programs are available.  Refer to
<dl>
<dd><a href="http://www.netlib.org/blacs/BLACS/Examples.html">
<address>http://www.netlib.org/scalapack/examples/</address></a>
</dl>
for a list of available example programs.

<p>
<IMG SRC="http://www.netlib.org/scalapack/html/gif/blue.gif"></p>
<h2>
3)  BLAS 
</h2>

<hr>
<p>
<strong>
<a name="3.1">
    3.1) What and where are the BLAS?<br>
</a>
</strong>

<p>
The BLAS (Basic Linear Algebra Subprograms) are high quality
"building block" routines for performing basic vector and matrix
operations.  Level 1 BLAS do vector-vector operations, Level 2
BLAS do matrix-vector operations, and Level 3 BLAS do
matrix-matrix operations.  Because the BLAS are efficient,
portable, and widely available, they're commonly used in the
development of high quality linear algebra software,
<a href="http://www.netlib.org/linpack/">LINPACK</a> and
<a href="http://www.netlib.org/lapack/">LAPACK</a> for example.  
<p>
A Fortran77 reference implementation of the BLAS is located in the
<a href="http://www.netlib.org/blas/">blas directory</a> of Netlib.

<hr>
<p>
<strong>
<a name="3.2">
    3.2) Publications/references for the BLAS?<br>
</a>
</strong>

<p>
<ol>
<li>
C. L. Lawson, R. J. Hanson, D. Kincaid, and F. T. Krogh, <i>Basic
Linear Algebra Subprograms for FORTRAN usage</i>, <a href="http://www.acm.org/toms/V5.html#v5n3">ACM Trans. Math.
Soft., 5 (1979)</a>, pp. 308--323.<p>

<li>
J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, <i>An
extended set of FORTRAN Basic Linear Algebra Subprograms</i>, <a href="http://www.acm.org/toms/V14.html">ACM Trans.
Math. Soft., 14 (1988)</a>, pp. 1--17.<p>

<li>
J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson,
<i>Algorithm 656: An extended set of FORTRAN Basic Linear Algebra
Subprograms</i>, <a href="http://www.acm.org/toms/V14.html">ACM Trans. Math. Soft., 14 (1988)</a>, pp. 18--32.<p>

<li>
J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, <i>A set of
Level 3 Basic Linear Algebra Subprograms</i>, <a href="http://www.acm.org/toms/V16.html">ACM Trans. Math. Soft.,
16 (1990)</a>, pp. 1--17.<p>

<li>
J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, <i>Algorithm
679: A set of Level 3 Basic Linear Algebra Subprograms</i>, <a href="http://www.acm.org/toms/V16.html">ACM Trans.
Math. Soft., 16 (1990)</a>, pp. 18--28.<p>
</ol>
 
<hr>
<p>
<strong>
<a name="3.3">
    3.3)  Is there a Quick Reference Guide to the BLAS available?<br>
</a>
</strong>
 
<p>
Yes, there is a postscript version of the <a href="http://www.netlib.org/blas/blasqr.ps">Quick Reference Guide to the BLAS</a> available.

<hr>
<p>
<strong>
<a name="3.4">
    3.4)  Are optimized BLAS libraries available?<br>
</a>
</strong>
 
<p>
YES!  Machine-specific optimized BLAS libraries are available for
a variety of computer architectures.  These optimized BLAS libraries
are provided by the computer vendor or by an independent software
vendor (ISV).  For further details, please contact your local vendor
representative.  </p>
<p>
Alternatively, the user can download <a href="http://www.netlib.org/atlas/">ATLAS</a> to automatically generate an optimized BLAS library for his
architecture.
</p>
<p>
If all else fails, the user can
download a <a href="http://www.netlib.org/blas/blas.tgz">Fortran77
reference implementation of the BLAS</a> from netlib.  However,
keep in mind that this is a reference implementation and is not 
optimized.</p>
<hr>
<p>
<strong>
<a name="3.5">
    3.5)  What is ATLAS?<br>
</a>
</strong>

<p>
ATLAS is an approach for the automatic generation and optimization of
numerical software for processors with deep memory hierarchies and pipelined
functional units. The production of such software for machines ranging from
desktop workstations to embedded processors can be a tedious and time
consuming task. ATLAS has been designed to automate much of this process.
We concentrate our efforts on the widely used linear algebra kernels
called the Basic Linear Algebra Subroutines (BLAS). </p>
<p>
For further information, refer to the <a href="http://www.netlib.org/atlas/">ATLAS webpage</a>.</p>

<hr>
<p>
<strong>
<a name="3.6">
    3.6)  Where can I find vendor supplied BLAS?<br>
</a>
</strong>
<p>
BLAS Vendor List <BR>
Last updated: November 13, 1998 <BR>
<HR><TABLE BORDER="1" CELLPADDING="3">
  <TR><TD ALIGN=LEFT><H3> Vendor </H3></TD>
  <TD ALIGN=LEFT><H3> URL </H3></TD></TR>


  <TR><TD ALIGN=LEFT> Cray </TD>
  <TD ALIGN=LEFT>
  <A HREF="http://www.sgi.com/Products/appsdirectory.dir/Applications/Math_Physi
cs_Other_Sciences/ApplicationNumber379376.html">
  http://www.sgi.com/Products/appsdirectory.dir/Applications/Math_Physics_Other_
Sciences/
  </A></TD></TR>


  <TR><TD ALIGN=LEFT> DEC </TD>
  <TD ALIGN=LEFT>
  <A HREF="http://www.digital.com/info/hpc/software/dxml.html">
  http://www.digital.com/info/hpc/software/dxml.html
  </A></TD></TR>

  <TR><TD ALIGN=LEFT> HP </TD>
  <TD ALIGN=LEFT>
  <A HREF="http://www.hp.com/esy/systems_networking/tech_servers/products/librar
y.html">
  http://www.hp.com/esy/systems_networking/tech_servers/products/library.html
  </A></TD></TR>

  <TR><TD ALIGN=LEFT> IBM </TD>
  <TD ALIGN=LEFT>
  <A HREF="http://www.rs6000.ibm.com/software/Apps/essl.html">
  http://www.rs6000.ibm.com/software/Apps/essl.html
  </A><BR>
  <A HREF="http://www.rs6000.ibm.com/software/sp_products/esslpara.html">
  http://www.rs6000.ibm.com/software/sp_products/esslpara.html
  </A></TD></TR>

  <TR><TD ALIGN=LEFT> Intel </TD>
  <TD ALIGN=LEFT>
  <A HREF="http://developer.intel.com/vtune/perflibst/mkl/mklperf.htm">
  http://developer.intel.com/vtune/perflibst/mkl/mklperf.htm (NT)
  </A></TD></TR>

  <TR><TD ALIGN=LEFT> SGI </TD>
  <TD ALIGN=LEFT>
  <A HREF="http://www.sgi.com/software/scsl.html">
  http://www.sgi.com/software/scsl.html
  </A></TD></TR>

  <TR><TD ALIGN=LEFT> SUN </TD>
  <TD ALIGN=LEFT>
  <A HREF="http://www.sun.com/workshop/fortran/
">
  http://www.sun.com/workshop/fortran/
  </A></TD></TR>

</TABLE>
</p>

<hr>
<p>
<strong>
<a name="3.7">
    3.7)  Where can I find the Intel BLAS for Linux?<br>
</a>
</strong>
<p>
The Intel BLAS for Linux are now available.  Refer to the following
URL:
<a href="http://www.cs.utk.edu/~ghenry/distrib/">Intel BLAS for Linux</a>.
</p>


<hr>
<p>
<strong>
<a name="3.8">
    3.8)  Where can I find Java BLAS?<br>
</a>
</strong>
<p>
Yes, Java BLAS are available.  Refer to the following
URLs:
<a href="http://www.cs.utk.edu/f2j/download.html/">Java LAPACK</a>
and
<a href="http://math.nist.gov/javanumerics/">JavaNumerics</a>
The <b>JavaNumerics</b> webpage provides a focal point for information
on numerical computing in Java.
</p>
 
<hr>
<p>
<strong>
<a name="3.9">
    3.9)  Are prebuilt Fortran77 reference implementation BLAS libraries available?<br>
</a>
</strong>
 
<p>
Yes.  However, it is assumed that you have a machine-specific optimized <b>BLAS</b>
library already available on the architecture to which you are installing
<b>LAPACK</b>.  If this is not the case, you can download a
<a href="http://www.netlib.org/blas/archives/">prebuilt Fortran77 reference
implementation BLAS library</a> or compile the
<a href="http://www.netlib.org/blas/blas.tgz">Fortran77 reference implementation
source code of the BLAS</a> from netlib.
<p>
Although a model implementation of the BLAS is available
from netlib in the <a href="http://www.netlib.org/blas/">blas directory</a>,
it is not expected to perform as well as a specially
tuned implementation on most high-performance computers -- on some machines
it may give much worse performance -- but it allows users to run LAPACK
software on machines that do not offer any other implementation of the
BLAS.

<p>
<IMG SRC="http://www.netlib.org/scalapack/html/gif/blue.gif"></p>
<a href="http://www.netlib.org/scalapack/html/scalapack_contributors.html">
<address>scalapack@cs.utk.edu</address></a>

</body>
</html>