File: README.org

* NAME
numpysane: more-reasonable core functionality for numpy

* SYNOPSIS
#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a   = np.arange(6).reshape(2,3)
>>> b   = a + 100
>>> row = a[0,:] + 1000

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> row
array([1000, 1001, 1002])

>>> nps.glue(a,b, axis=-1)
array([[  0,   1,   2, 100, 101, 102],
       [  3,   4,   5, 103, 104, 105]])

>>> nps.glue(a,b,row, axis=-2)
array([[   0,    1,    2],
       [   3,    4,    5],
       [ 100,  101,  102],
       [ 103,  104,  105],
       [1000, 1001, 1002]])

>>> nps.cat(a,b)
array([[[  0,   1,   2],
        [  3,   4,   5]],

       [[100, 101, 102],
        [103, 104, 105]]])

>>> @nps.broadcast_define( (('n',), ('n',)) )
... def inner_product(a, b):
...     return a.dot(b)

>>> inner_product(a,b)
array([ 305, 1250])
#+END_EXAMPLE

* DESCRIPTION
Numpy is widely used, relatively polished, and has a wide range of libraries
available. At the same time, some of its very core functionality is strange,
confusing and just plain wrong. This is in contrast with PDL
(http://pdl.perl.org), which has a much more reasonable core, but a number of
higher-level warts, and a relative dearth of library support. This module
intends to improve the developer experience by providing alternate APIs to some
core numpy functionality that is much more reasonable, especially for those who
have used PDL in the past.

Instead of writing a new module (this module), it would be really nice to simply
patch numpy to give everybody the more reasonable behavior. I'd be very happy to
do that, but the issues lie with some very core functionality, and any changes
in behavior would break existing code. Any comments on how to achieve better
behavior in a less forky manner are welcome.

Finally, if the existing system DOES make sense in some way that I'm simply not
understanding, I'm happy to listen. I have no intention to disparage anyone or
anything; I just want a more usable system for numerical computations.

The issues addressed by this module fall into two broad categories:

1. Incomplete broadcasting support
2. Strange, special-case-ridden rules for basic array manipulation, especially
   dealing with dimensionality

** Broadcasting
*** Problem
Numpy has limited support for broadcasting
(http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html), a generic way
to vectorize functions. When making a broadcasted call to a function, you pass
in arguments with the inputs to vectorize available in new dimensions, and the
broadcasting mechanism automatically calls the function multiple times as
needed, and reports the output as an array collecting all the results.

A basic example is an inner product: a function that takes in two
identically-sized vectors (1-dimensional arrays) and returns a scalar
(0-dimensional array). A broadcasted inner product function could take in two
arrays of shape (2,3,4), compute the 6 inner products of length-4 each, and
report the output in an array of shape (2,3). Numpy puts the most-significant
dimension at the end, which is why this isn't 12 inner products of length-2
each. This is an arbitrary design choice, which could have been made
differently: PDL puts the most-significant dimension at the front.

The user doesn't choose whether to use broadcasting or not: some functions
support it, and some do not. In PDL, broadcasting (called "threading" in that
system) is a pervasive concept throughout. A PDL user has an expectation that
every function can broadcast, and the documentation for every function is very
explicit about the dimensionality of the inputs and outputs. Any data above the
expected input dimensions is broadcast.

By contrast, in numpy very few functions know how to broadcast. On top of that,
the documentation is usually silent about the broadcasting status of a function
in question. And on top of THAT, broadcasting rules state that an array of
dimensions (n,m) is functionally identical to one of dimensions
(1,1,1,....1,n,m). Sadly, numpy does not respect its own broadcasting rules, and
many functions have special-case logic to create different behaviors for inputs
with different numbers of dimensions; and this creates unexpected results. The
effect of all this is a messy situation where the user is often not sure of the
exact behavior of the functions they're calling, and trial and error is required
to make the system do what one wants.

*** Solution
This module contains functionality to make any arbitrary function broadcastable.
This is invoked as a decorator, applied to the arbitrary user function. An
example:

#+BEGIN_EXAMPLE
>>> import numpysane as nps

>>> @nps.broadcast_define( (('n',), ('n',)) )
... def inner_product(a, b):
...     return a.dot(b)
#+END_EXAMPLE

Here we have a simple inner product function to compute ONE inner product. We
call 'broadcast_define' to add a broadcasting-aware wrapper that takes two 1D
vectors of length 'n' each (same 'n' for the two inputs). This new
'inner_product' function applies broadcasting, as needed:

#+BEGIN_EXAMPLE
>>> import numpy as np

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> inner_product(a,b)
array([ 305, 1250])
#+END_EXAMPLE

A detailed description of broadcasting rules is available in the numpy
documentation: http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html

In short:

- The most significant dimension in a numpy array is the LAST one, so the
  prototype of an input argument must exactly match a given input's trailing
  shape. So a prototype shape of (a,b,c) accepts an argument shape of (......,
  a,b,c), with as many or as few leading dimensions as desired.
- The extra leading dimensions must be compatible across all the inputs. This
  means that each leading dimension must either
  - be equal to 1
  - be missing (and thus assumed to equal 1)
  - be equal to some positive integer >1, consistent across all arguments
- The output is collected into an array that's sized as a superset of the
  above-prototype shape of each argument

More involved example: A function with input prototype ( (3,), ('n',3), ('n',),
('m',) ) given inputs of shape

#+BEGIN_SRC python
(1,5,    3)
(2,1,  8,3)
(        8)
(  5,    9)
#+END_SRC

will return an output array of shape (2,5, ...), where ... is the shape of each
output slice. Note again that the prototype dictates the TRAILING shape of the
inputs.
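
To make this concrete, here's a sketch with a hypothetical inner function;
only the prototype and the input shapes come from the example above:

#+BEGIN_SRC python
import numpy as np
import numpysane as nps

# Hypothetical function; each slice returns a scalar, so the "..." in the
# output shape is just () here
@nps.broadcast_define( ((3,), ('n',3), ('n',), ('m',)) )
def f(a, B, x, y):
    # Per-slice shapes: a is (3,), B is (n,3), x is (n,), y is (m,)
    return np.sum(B.dot(a) * x) + np.sum(y)

out = f( np.zeros((1,5,    3)),
         np.zeros((2,1,  8,3)),
         np.zeros((        8,)),
         np.zeros((  5,    9)) )
print(out.shape)   # -> (2, 5): the broadcasted extra dimensions
#+END_SRC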

Another related function in this module is broadcast_generate(). It's similar to
broadcast_define(), but instead of adding broadcasting-awareness to an existing
function, it simply generates tuples from a set of arguments according to a
given prototype.

Stock numpy has some rudimentary support for all this with its vectorize()
function, but it assumes only scalar inputs and outputs, which severely limits
its usefulness.
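
For reference, here's a minimal np.vectorize() sketch illustrating that
restriction:

#+BEGIN_SRC python
import numpy as np

# np.vectorize() maps a scalar function over arrays element-by-element, so a
# whole-vector operation such as an inner product can't be expressed with it
# directly
add = np.vectorize(lambda x, y: x + y)
print(add(np.arange(3), 100))   # -> [100 101 102]
#+END_SRC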

*** New planned functionality

In addition to this basic broadcasting support, I'm planning the following:

- A C-level broadcast_define(). This would be the analogue of PDL::PP
  (http://pdl.perl.org/PDLdocs/PP.html). This flavor of broadcast_define() would
  be invoked by the build system to wrap C functions. It would implement
  broadcasting awareness in C code it generates, which should work more
  effectively for performance-sensitive inner loops. Currently broadcasting
  loops are all implemented in python, and this can get noticeably slow for
  large broadcasts.

- Automatic parallelization for broadcasted slices. Since each broadcasting loop
  is independent, this is a very natural place to add parallelism.

- Dimensions should support a symbolic declaration. For instance, one could want
  a function to accept an input of shape (n) and another of shape (n*n). There's
  no way to declare this currently, but there should be.

** Strangeness in core routines
*** Problem
There are some core numpy functions whose behavior is strange, full of special
cases and very confusing, at least to me. That makes it difficult to achieve
some very basic things. In the following examples, I use a function "arr" that
returns a numpy array with given dimensions:

#+BEGIN_EXAMPLE
>>> from functools import reduce
>>> def arr(*shape):
...     product = reduce( lambda x,y: x*y, shape)
...     return np.arange(product).reshape(*shape)

>>> arr(1,2,3)
array([[[0, 1, 2],
        [3, 4, 5]]])

>>> arr(1,2,3).shape
(1, 2, 3)
#+END_EXAMPLE

The following sections are an incomplete list of the strange functionality I've
encountered.

**** Concatenation
A prime example of confusing functionality is the array concatenation routines.
Numpy has a number of functions to do this, each being strange.

***** hstack()
hstack() performs a "horizontal" concatenation. When numpy prints an array, this
is the last dimension (remember, the most significant dimensions in numpy are at
the end). So one would expect that this function concatenates arrays along this
last dimension. In the special case of 1D and 2D arrays, one would be right:

#+BEGIN_EXAMPLE
>>> np.hstack( (arr(3), arr(3))).shape
(6,)

>>> np.hstack( (arr(2,3), arr(2,3))).shape
(2, 6)
#+END_EXAMPLE

but in any other case, one would be wrong:

#+BEGIN_EXAMPLE
>>> np.hstack( (arr(1,2,3), arr(1,2,3))).shape
(1, 4, 3)     <------ I expect (1, 2, 6)

>>> np.hstack( (arr(1,2,3), arr(1,2,4))).shape
[exception]   <------ I expect (1, 2, 7)

>>> np.hstack( (arr(3), arr(1,3))).shape
[exception]   <------ I expect (1, 6)

>>> np.hstack( (arr(1,3), arr(3))).shape
[exception]   <------ I expect (1, 6)
#+END_EXAMPLE

I think the above should all succeed, and should produce the shapes as
indicated. Cases such as "np.hstack( (arr(3), arr(1,3)))" are maybe up for
debate, but broadcasting rules allow adding as many extra length-1 dimensions as
we want without changing the meaning of the object, so I claim this should work.
Either way, if you print out the operands for any of the above, you too would
expect a "horizontal" stack() to work as stated above.

It turns out that normally hstack() concatenates along axis=1, unless the first
argument only has one dimension, in which case axis=0 is used. This is 100%
wrong in a system where the most significant dimension is the last one, unless
you assume that everyone has only 2D arrays, where the last dimension and the
second dimension are the same.

The correct way to do this is to concatenate along axis=-1. It works for
n-dimensional objects, and doesn't require the special-case logic for
1-dimensional objects that hstack() has.
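
As a sketch, the failing cases above all produce the shapes argued for when
concatenated along axis=-1 with this module's glue() (documented below;
arr() is the helper defined earlier):

#+BEGIN_SRC python
import numpy as np
import numpysane as nps

nps.glue( arr(1,2,3), arr(1,2,3), axis=-1 ).shape   # -> (1, 2, 6)
nps.glue( arr(1,2,3), arr(1,2,4), axis=-1 ).shape   # -> (1, 2, 7)
nps.glue( arr(3),     arr(1,3),   axis=-1 ).shape   # -> (1, 6)
#+END_SRC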

***** vstack()
Similarly, vstack() performs a "vertical" concatenation. When numpy prints an
array, this is the second-to-last dimension (remember, the most significant
dimensions in numpy are at the end). So one would expect that this function
concatenates arrays along this second-to-last dimension. In the special
case of 1D and 2D arrays, one would be right:

#+BEGIN_EXAMPLE
>>> np.vstack( (arr(2,3), arr(2,3))).shape
(4, 3)

>>> np.vstack( (arr(3), arr(3))).shape
(2, 3)

>>> np.vstack( (arr(1,3), arr(3))).shape
(2, 3)

>>> np.vstack( (arr(3), arr(1,3))).shape
(2, 3)

>>> np.vstack( (arr(2,3), arr(3))).shape
(3, 3)
#+END_EXAMPLE

Note that this function appears to tolerate some amount of shape mismatch. It
does this in a form one would expect, but given the state of the rest of this
system, I found it surprising. For instance "np.hstack( (arr(1,3), arr(3)))"
fails, so one would think that "np.vstack( (arr(1,3), arr(3)))" would fail too.

And once again, adding more dimensions makes it confused, for the same reason:

#+BEGIN_EXAMPLE
>>> np.vstack( (arr(1,2,3), arr(2,3))).shape
[exception]   <------ I expect (1, 4, 3)

>>> np.vstack( (arr(1,2,3), arr(1,2,3))).shape
(2, 2, 3)     <------ I expect (1, 4, 3)
#+END_EXAMPLE

Similarly to hstack(), vstack() concatenates along axis=0, which is "vertical"
only for 2D arrays, but not for any others. And similarly to hstack(), the 1D
case has special-cased logic to work properly.

The correct way to do this is to concatenate along axis=-2. It works for
n-dimensional objects, and doesn't require the special case for 1-dimensional
objects that vstack() has.
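
Again as a sketch, glue() along axis=-2 produces the shapes argued for
above (arr() as defined earlier):

#+BEGIN_SRC python
import numpy as np
import numpysane as nps

nps.glue( arr(1,2,3), arr(2,3),   axis=-2 ).shape   # -> (1, 4, 3)
nps.glue( arr(1,2,3), arr(1,2,3), axis=-2 ).shape   # -> (1, 4, 3)
#+END_SRC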

***** dstack()
I'll skip the detailed description, since this is similar to hstack() and
vstack(). The intent was to concatenate across axis=-3, but the implementation
takes axis=2 instead. This is wrong, as before. And I find it strange that these
3 functions even exist, since they are all special cases: the concatenation axis
should be an argument, and at most the edge special case (hstack()) should
exist. This brings us to the next function:

***** concatenate()
This is a more general function, and unlike hstack(), vstack() and dstack(), it
takes as input a list of arrays AND the concatenation dimension. It accepts
negative concatenation dimensions to allow us to count from the end, so things
should work better. And in many ways that failed previously, they do:

#+BEGIN_EXAMPLE
>>> np.concatenate( (arr(1,2,3), arr(1,2,3)), axis=-1).shape
(1, 2, 6)

>>> np.concatenate( (arr(1,2,3), arr(1,2,4)), axis=-1).shape
(1, 2, 7)

>>> np.concatenate( (arr(1,2,3), arr(1,2,3)), axis=-2).shape
(1, 4, 3)
#+END_EXAMPLE

But many things still don't work as I would expect:

#+BEGIN_EXAMPLE
>>> np.concatenate( (arr(1,3), arr(3)), axis=-1).shape
[exception]   <------ I expect (1, 6)

>>> np.concatenate( (arr(3), arr(1,3)), axis=-1).shape
[exception]   <------ I expect (1, 6)

>>> np.concatenate( (arr(1,3), arr(3)), axis=-2).shape
[exception]   <------ I expect (3, 3)

>>> np.concatenate( (arr(3), arr(1,3)), axis=-2).shape
[exception]   <------ I expect (2, 3)

>>> np.concatenate( (arr(2,3), arr(2,3)), axis=-3).shape
[exception]   <------ I expect (2, 2, 3)
#+END_EXAMPLE

This function works as expected only if

- All inputs have the same number of dimensions
- All inputs have a matching shape, except for the dimension along which we're
  concatenating
- All inputs HAVE the dimension along which we're concatenating

A legitimate use case that violates these conditions: I have an object that
contains N 3D vectors, and I want to add another 3D vector to it. This is
essentially the first failing example above.
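
With this module's glue() (documented below) this use case works directly;
a sketch:

#+BEGIN_SRC python
import numpy as np
import numpysane as nps

# Append a single 3D vector to an array of N 3D vectors
v  = np.arange(3)                  # shape (3,): one 3D vector
vs = np.arange(15).reshape(5,3)    # shape (5,3): N=5 3D vectors
nps.glue( vs, v, axis=-2 ).shape   # -> (6, 3)
#+END_SRC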

***** stack()
The name makes it sound exactly like concatenate(), and it takes the same
arguments, but it is very different. stack() requires that all inputs have
EXACTLY the same shape. It then concatenates all the inputs along a new
dimension, and places that dimension in the location given by the 'axis' input.
If this is the exact type of concatenation you want, this function works fine.
But it's one of many things a user may want to do.
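
For completeness, a quick illustration of stock numpy's stack() behavior as
described above:

#+BEGIN_SRC python
import numpy as np

# stack() requires identical input shapes, and adds a NEW dimension at 'axis'
a = np.zeros((2,3))
np.stack( (a,a), axis=0  ).shape   # -> (2, 2, 3)
np.stack( (a,a), axis=-1 ).shape   # -> (2, 3, 2)
#+END_SRC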

**** inner() and dot()
Another arbitrary example of a strange API is np.dot() and np.inner(). In a
real-valued n-dimensional Euclidean space, a "dot product" is just another name
for an "inner product". Numpy disagrees.

It looks like np.dot() is matrix multiplication, with some wonky behaviors when
given higher-dimension objects, and with some special-case behaviors for
1-dimensional and 0-dimensional objects:

#+BEGIN_EXAMPLE
>>> np.dot( arr(4,5,2,3), arr(3,5)).shape
(4, 5, 2, 5) <--- expected result for a broadcasted matrix multiplication

>>> np.dot( arr(3,5), arr(4,5,2,3)).shape
[exception] <--- np.dot() is not commutative.
                 Expected for matrix multiplication, but not for a dot
                 product

>>> np.dot( arr(4,5,2,3), arr(1,3,5)).shape
(4, 5, 2, 1, 5) <--- don't know where this came from at all

>>> np.dot( arr(4,5,2,3), arr(3)).shape
(4, 5, 2) <--- 1D special case. This is a dot product.

>>> np.dot( arr(4,5,2,3), 3).shape
(4, 5, 2, 3) <--- 0D special case. This is a scaling.
#+END_EXAMPLE

It looks like np.inner() is some sort of quasi-broadcastable inner product, also
with some funny dimensioning rules. In many cases it looks like np.dot(a,b) is
the same as np.inner(a, transpose(b)) where transpose() swaps the last two
dimensions:


#+BEGIN_EXAMPLE
>>> np.inner( arr(4,5,2,3), arr(5,3)).shape
(4, 5, 2, 5) <--- All the length-3 inner products collected into a shape
                  with not-quite-broadcasting rules

>>> np.inner( arr(5,3), arr(4,5,2,3)).shape
(5, 4, 5, 2) <--- np.inner() is not commutative. Unexpected
                  for an inner product

>>> np.inner( arr(4,5,2,3), arr(1,5,3)).shape
(4, 5, 2, 1, 5) <--- No idea

>>> np.inner( arr(4,5,2,3), arr(3)).shape
(4, 5, 2) <--- 1D special case. This is a dot product.

>>> np.inner( arr(4,5,2,3), 3).shape
(4, 5, 2, 3) <--- 0D special case. This is a scaling.
#+END_EXAMPLE
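
For contrast, a sketch of this module's broadcasted inner() (documented
below), which follows plain broadcasting rules:

#+BEGIN_SRC python
import numpy as np
import numpysane as nps

# Each slice is a length-3 inner product; the leading dimensions broadcast
nps.inner( np.zeros((4,5,2,3)), np.zeros((3,))  ).shape   # -> (4, 5, 2)
nps.inner( np.zeros((4,5,2,3)), np.zeros((2,3)) ).shape   # -> (4, 5, 2)
#+END_SRC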

**** atleast_xd()
Numpy has 3 special-case functions atleast_1d(), atleast_2d() and atleast_3d().
For 4d and higher, you need to do something else. As expected by now, these do
surprising things:

#+BEGIN_EXAMPLE
>>> np.atleast_3d( arr(3)).shape
(1, 3, 1)
#+END_EXAMPLE

I don't know when this is what I would want, so we move on.
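
For contrast, this module's atleast_dims() (documented in the INTERFACE
section below) pads only at the front, preserving the alignment of the
trailing (most-significant) dimensions; a sketch:

#+BEGIN_SRC python
import numpy as np
import numpysane as nps

# New length-1 dimensions go at the FRONT, so axis=-1 keeps its meaning
nps.atleast_dims( np.arange(3), -3 ).shape   # -> (1, 1, 3)
#+END_SRC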


*** Solution
This module introduces new functions that can be used for this core
functionality instead of the builtin numpy functions. These new functions work
in ways that (I think) are more intuitive and more reasonable. They do not refer
to anything being "horizontal" or "vertical", nor do they talk about "rows" or
"columns"; these concepts simply don't apply in a generic N-dimensional system.
These functions are very explicit about the dimensionality of the
inputs/outputs, and fit well into a broadcasting-aware system. Furthermore, the
names and semantics of these new functions come directly from PDL, which is more
consistent in this area.

Since these functions assume that broadcasting is an important concept in the
system, the given axis indices should be counted from the most significant
dimension: the last dimension in numpy. This means that where an axis index is
specified, negative indices are encouraged. glue() forbids axis>=0 outright.


Example for further justification:

An array containing N 3D vectors would have shape (N,3). Another array
containing a single 3D vector would have shape (3). Counting the dimensions from
the end, each vector is indexed in dimension -1. However, counting from the
front, the vector is indexed in dimension 0 or 1, depending on which of the two
arrays we're looking at. If we want to add the single vector to the array
containing the N vectors, and we mistakenly try to concatenate along the first
dimension, it would fail if N != 3. But if we're unlucky, and N=3, then we'd get
a nonsensical output array of shape (3,4). Why would an array of N 3D vectors
have shape (N,3) and not (3,N)? Because if we apply python iteration to it, we'd
expect to get N iterates of arrays with shape (3,) each, and numpy iterates from
the first dimension:

#+BEGIN_EXAMPLE
>>> a = np.arange(2*3).reshape(2,3)

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> [x for x in a]
[array([0, 1, 2]), array([3, 4, 5])]
#+END_EXAMPLE

New functions this module provides (documented fully in the next section):

**** glue
Concatenates arrays along a given axis ('axis' must be given in a kwarg).
Implicit length-1 dimensions are added at the start as needed. Dimensions other
than the glueing axis must match exactly.

**** cat
Concatenate a given list of arrays along a new least-significant (leading) axis.
Again, implicit length-1 dimensions are added as needed, the resulting shapes
must match, and no data duplication occurs.

**** clump
Reshapes the array by grouping together 'n' dimensions, where 'n' is given in a
kwarg. If 'n' > 0, then n leading dimensions are clumped; if 'n' < 0, then -n
trailing dimensions are clumped.

**** atleast_dims
Adds length-1 dimensions at the front of an array so that all the given
dimensions are in-bounds. Given axis<0 can expand the shape; given axis>=0 MUST
already be in-bounds. This preserves the alignment of the most-significant axis
index.

**** mv
Moves a dimension from one position to another

**** xchg
Exchanges the positions of two dimensions

**** transpose
Reverses the order of the two most significant dimensions in an array. The whole
array is seen as being an array of 2D matrices, each matrix living in the 2 most
significant dimensions, which implies this definition.

**** dummy
Adds a single length-1 dimension at the given position

**** reorder
Completely reorders the dimensions in an array

**** dot
Broadcast-aware non-conjugating dot product. Identical to inner

**** vdot
Broadcast-aware conjugating dot product

**** inner
Broadcast-aware inner product. Identical to dot

**** outer
Broadcast-aware outer product.

**** norm2
Broadcast-aware 2-norm. norm2(x) is identical to inner(x,x)

**** trace
Broadcast-aware trace.

**** matmult
Broadcast-aware matrix multiplication

*** New planned functionality
The functions listed above are a start, but more will be added with time.

* INTERFACE
** broadcast_define()
Vectorizes an arbitrary function, expecting input as in the given prototype.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> @nps.broadcast_define( (('n',), ('n',)) )
... def inner_product(a, b):
...     return a.dot(b)

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> inner_product(a,b)
array([ 305, 1250])
#+END_EXAMPLE


The prototype defines the dimensionality of the inputs. In the inner product
example above, each input is a 1D vector of length 'n'. In particular, the
'n' is the same for the two inputs. This function is intended to be used as
a decorator, applied to a function defining the operation to be vectorized.
Each element in the prototype list refers to one input, in order. In turn,
each such element is a list that describes the shape of that input. Each of
these shape descriptors can be any of

- a positive integer, indicating an input dimension of exactly that length
- a string, indicating an arbitrary, but internally consistent dimension

The normal numpy broadcasting rules (as described elsewhere) apply. In
summary:

- Dimensions are aligned at the end of the shape list, and must match the
  prototype

- Extra dimensions left over at the front must be consistent for all the
  input arguments, meaning:

  - All dimensions !=1 must be identical
  - Missing dimensions are implicitly set to 1

- The output has a shape where
  - The trailing dimensions are whatever the function being broadcasted
    outputs
  - The leading dimensions come from the extra dimensions in the inputs

Scalars are represented as 0-dimensional numpy arrays: arrays with shape (),
and these broadcast as one would expect:

#+BEGIN_EXAMPLE
>>> @nps.broadcast_define( (('n',), ('n',), ()))
... def scaled_inner_product(a, b, scale):
...     return a.dot(b)*scale

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100
>>> scale = np.array((10,100))

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> scale
array([ 10, 100])

>>> scaled_inner_product(a,b,scale)
array([[  3050],
       [125000]])
#+END_EXAMPLE

Let's look at a more involved example. Let's say we have a function that
takes a set of points in R^2 and a single center point in R^2, and finds a
best-fit least-squares line that passes through the given center point. Let
it return a 3D vector containing the slope, y-intercept and the RMS residual
of the fit. This broadcasting-enabled function can be defined like this:

#+BEGIN_SRC python
import numpy as np
import numpysane as nps

@nps.broadcast_define( (('n',2), (2,)) )
def fit(xy, c):
    # line-through-origin-model: y = m*x
    # E = sum( (m*x - y)**2 )
    # dE/dm = 2*sum( (m*x-y)*x ) = 0
    # ----> m = sum(x*y)/sum(x*x)
    x,y = (xy - c).transpose()
    m = np.sum(x*y) / np.sum(x*x)
    err = m*x - y
    err **= 2
    rms = np.sqrt(err.mean())
    # I return m,b because I need to translate the line back
    b = c[1] - m*c[0]

    return np.array((m,b,rms))
#+END_SRC

And I can use broadcasting to compute a number of these fits at once. Let's
say I want to compute 4 different fits of 5 points each. I can do this:

#+BEGIN_SRC python
n = 5
m = 4
c = np.array((20,300))
xy = np.arange(m*n*2, dtype=np.float64).reshape(m,n,2) + c
xy += np.random.rand(*xy.shape)*5

res = fit( xy, c )
mb  = res[..., 0:2]
rms = res[..., 2]
print "RMS residuals: {}".format(rms)
#+END_SRC

Here I had 4 different sets of points, but a single center point c. If I
wanted 4 different center points, I could pass c as an array of shape (4,2).
I can use broadcasting to plot all the results (the points and the fitted
lines):

#+BEGIN_SRC python
import gnuplotlib as gp

gp.plot( *nps.mv(xy,-1,0), _with='linespoints',
         equation=['{}*x + {}'.format(mb_single[0],
                                      mb_single[1]) for mb_single in mb],
         unset='grid', square=1)
#+END_SRC

The examples above all create a separate output array for each broadcasted
slice, and copy the contents from each such slice into the large output
array that contains all the results. This is inefficient, and it is possible
to pre-allocate an array to forgo these extra allocations and copies. There
are several settings to control this. If the function being broadcasted can
write its output to a given array instead of creating a new one, most of the
inefficiency goes away. broadcast_define() supports the case where this
function takes this array in a kwarg: the name of this kwarg can be given to
broadcast_define() like so:

#+BEGIN_SRC python
@nps.broadcast_define( ....., out_kwarg = "out" )
def func( ....., out):
    .....
    out[:] = result
#+END_SRC

In order for broadcast_define() to pass such an output array to the inner
function, this output array must be available, which means that it must be
given to us somehow, or we must create it.

The most efficient way to make a broadcasted call is to create the full
output array beforehand, and to pass that to the broadcasted function. In
this case, nothing extra will be allocated, and no unnecessary copies will
be made. This can be done like this:

#+BEGIN_SRC python
@nps.broadcast_define( (('n',), ('n',)), ....., out_kwarg = "out" )
def inner_product(a, b, out):
    .....
    out.setfield(a.dot(b), out.dtype)
    return out

out = np.empty((2,4), float)
inner_product( np.arange(3), np.arange(2*4*3).reshape(2,4,3), out=out)
#+END_SRC

In this example, the caller knows that it's calling an inner_product
function, and that the shape of each output slice would be (). The caller
also knows the input dimensions and that we have an extra broadcasting
dimension (2,4), so the output array will have shape (2,4) + () = (2,4).
With this knowledge, the caller preallocates the array, and passes it to the
broadcasted function call. Furthermore, in this case the inner function will
be called with an output array EVERY time, and this is the only mode the
inner function needs to support.

If the caller doesn't know (or doesn't want to pre-compute) the shape of the
output, it can let the broadcasting machinery create this array for them. In
order for this to be possible, the shape of the output should be
pre-declared, and the dtype of the output should be known:

#+BEGIN_SRC python
@nps.broadcast_define( (('n',), ('n',)),
                       (),
                       out_kwarg = "out" )
def inner_product(a, b, out):
    .....
    out.setfield(a.dot(b), out.dtype)
    return out

out = inner_product( np.arange(3), np.arange(2*4*3).reshape(2,4,3), dtype=int)
#+END_SRC

Note that the caller didn't need to specify the prototype of the output or
the extra broadcasting dimensions (output prototype is in the
broadcast_define() call, but not the inner_product() call). Specifying the
dtype here is optional: it defaults to float if omitted. If we want the
output array to be pre-allocated, the output prototype (it is () in this
example) is required: we must know the shape of the output array in order to
create it.

Without a declared output prototype, we can still make mostly-efficient
calls: the broadcasting mechanism can call the inner function for the first
slice as we showed earlier, by creating a new array for the slice. This new
array requires an extra allocation and copy, but it contains the required
shape information. This information will be used to allocate the output, and
the subsequent calls to the inner function will be efficient:

#+BEGIN_SRC python
@nps.broadcast_define( (('n',), ('n',)),
                       out_kwarg = "out" )
def inner_product(a, b, out=None):
    .....
    if out is None:
        return a.dot(b)
    out.setfield(a.dot(b), out.dtype)
    return out

out = inner_product( np.arange(3), np.arange(2*4*3).reshape(2,4,3))
#+END_SRC

Here we were slightly inefficient, but the ONLY required extra specification
was out_kwarg: that's mostly all you need. Also note that in this case the
inner function is called in both modes: sometimes it is passed an output
array to fill in, and sometimes it is asked to create a new one (it is
passed out=None). The inner function must therefore support both modes of
operation. If the inner function does not support filling in an output
array, none of these efficiency improvements are possible.

broadcast_define() is analogous to thread_define() in PDL.

** broadcast_generate()
A generator that produces broadcasted slices

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> for s in nps.broadcast_generate( (('n',), ('n',)), (a,b)):
...     print "slice: {}".format(s)
slice: (array([0, 1, 2]), array([100, 101, 102]))
slice: (array([3, 4, 5]), array([103, 104, 105]))
#+END_EXAMPLE

** glue()
Concatenates a given list of arrays along the given 'axis' keyword argument.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100
>>> row = a[0,:] + 1000

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> row
array([1000, 1001, 1002])

>>> nps.glue(a,b, axis=-1)
array([[  0,   1,   2, 100, 101, 102],
       [  3,   4,   5, 103, 104, 105]])

# empty arrays ignored when glueing. Useful for initializing an accumulation
>>> nps.glue(a,b, np.array(()), axis=-1)
array([[  0,   1,   2, 100, 101, 102],
       [  3,   4,   5, 103, 104, 105]])

>>> nps.glue(a,b,row, axis=-2)
array([[   0,    1,    2],
       [   3,    4,    5],
       [ 100,  101,  102],
       [ 103,  104,  105],
       [1000, 1001, 1002]])

>>> nps.glue(a,b, axis=-3)
array([[[  0,   1,   2],
        [  3,   4,   5]],

       [[100, 101, 102],
        [103, 104, 105]]])
#+END_EXAMPLE

The 'axis' must be given in a keyword argument.

In order to count dimensions from the inner-most outwards, this function accepts
only negative axis arguments. This is because numpy broadcasts from the last
dimension, and the last dimension is the inner-most in the (usual) internal
storage scheme. Allowing glue() to look at dimensions at the start would allow
it to unalign the broadcasting dimensions, which is never what you want.

To glue along the last dimension, pass axis=-1; to glue along the second-to-last
dimension, pass axis=-2, and so on.

Unlike in PDL, this function refuses to create duplicated data to make the
shapes fit. In my experience, this isn't what you want, and can create bugs.
For instance, PDL does this:

#+BEGIN_EXAMPLE
pdl> p sequence(3,2)
[
 [0 1 2]
 [3 4 5]
]

pdl> p sequence(3)
[0 1 2]

pdl> p PDL::glue( 0, sequence(3,2), sequence(3) )
[
 [0 1 2 0 1 2]   <--- Note the duplicated "0,1,2"
 [3 4 5 0 1 2]
]
#+END_EXAMPLE

while numpysane.glue() does this:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a[0:1,:]


>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[0, 1, 2]])

>>> nps.glue(a,b,axis=-1)
[exception]
#+END_EXAMPLE

Finally, this function adds as many length-1 dimensions at the front as
required. Note that this does not create new data, just new degenerate
dimensions. Example:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> res = nps.glue(a,b, axis=-5)
>>> res
array([[[[[  0,   1,   2],
          [  3,   4,   5]]]],



       [[[[100, 101, 102],
          [103, 104, 105]]]]])

>>> res.shape
(2, 1, 1, 2, 3)
#+END_EXAMPLE

In numpysane older than 0.10 the semantics were slightly different: the axis
kwarg was optional, and glue(*args) would glue along a new leading
dimension, and thus would be equivalent to cat(*args). This resulted in very
confusing error messages if the user accidentally omitted the kwarg. To
request the legacy behavior, do

#+BEGIN_SRC python
nps.glue.legacy_version = '0.9'
#+END_SRC

** cat()
Concatenates a given list of arrays along a new first (outer) dimension.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100
>>> c = a - 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> c
array([[-100,  -99,  -98],
       [ -97,  -96,  -95]])

>>> res = nps.cat(a,b,c)
>>> res
array([[[   0,    1,    2],
        [   3,    4,    5]],

       [[ 100,  101,  102],
        [ 103,  104,  105]],

       [[-100,  -99,  -98],
        [ -97,  -96,  -95]]])

>>> res.shape
(3, 2, 3)

>>> [x for x in res]
[array([[0, 1, 2],
        [3, 4, 5]]),
 array([[100, 101, 102],
        [103, 104, 105]]),
 array([[-100,  -99,  -98],
        [ -97,  -96,  -95]])]
#+END_EXAMPLE

This function concatenates the input arrays into an array shaped like the
highest-dimensioned input, but with a new outer (at the start) dimension.
The concatenation axis is this new dimension.

As usual, the dimensions are aligned along the last one, so broadcasting
will continue to work as expected. Note that this is the opposite operation
from iterating a numpy array; see the example above.

** clump()
Groups the given n dimensions together.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpysane as nps
>>> nps.clump( arr(2,3,4), n = -2).shape
(2, 12)
#+END_EXAMPLE

Reshapes the array by grouping together 'n' dimensions, where 'n' is given
in a kwarg. If 'n' > 0, then n leading dimensions are clumped; if 'n' < 0,
then -n trailing dimensions are clumped.

So for instance, if x.shape is (2,3,4), then nps.clump(x, n = -2).shape is
(2,12) and nps.clump(x, n = 2).shape is (6, 4).

In numpysane older than 0.10 the semantics were different: n > 0 was
required, and we ALWAYS clumped the trailing dimensions. Thus the new
clump(-n) is equivalent to the old clump(n). To request the legacy behavior,
do

#+BEGIN_SRC python
nps.clump.legacy_version = '0.9'
#+END_SRC

** atleast_dims()
Returns an array with extra length-1 dimensions to contain all given axes.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> nps.atleast_dims(a, -1).shape
(2, 3)

>>> nps.atleast_dims(a, -2).shape
(2, 3)

>>> nps.atleast_dims(a, -3).shape
(1, 2, 3)

>>> nps.atleast_dims(a, 0).shape
(2, 3)

>>> nps.atleast_dims(a, 1).shape
(2, 3)

>>> nps.atleast_dims(a, 2).shape
[exception]

>>> l = [-3,-2,-1,0,1]
>>> nps.atleast_dims(a, l).shape
(1, 2, 3)

>>> l
[-3, -2, -1, 1, 2]
#+END_EXAMPLE

If the given axes already exist in the given array, the given array itself
is returned. Otherwise length-1 dimensions are added to the front until all
the requested dimensions exist. The given axis>=0 dimensions MUST all be
in-bounds from the start, otherwise the most-significant axis becomes
unaligned; an exception is thrown if this is violated. The given axis<0
dimensions that are out-of-bounds result in new dimensions added at the
front.

If new dimensions need to be added at the front, then any axis>=0 indices
become offset. For instance:

#+BEGIN_EXAMPLE
>>> x.shape
(2, 3, 4)

>>> [x.shape[i] for i in (0,-1)]
[2, 4]

>>> x = nps.atleast_dims(x, 0, -1, -5)
>>> x.shape
(1, 1, 2, 3, 4)

>>> [x.shape[i] for i in (0,-1)]
[1, 4]
#+END_EXAMPLE

Before the call, axis=0 refers to the length-2 dimension and axis=-1 refers
to the length-4 dimension. After the call, axis=-1 refers to the same
dimension as before, but axis=0 now refers to a new length-1 dimension. If
it is desired to compensate for this offset, then instead of passing the
axes as separate arguments, pass in a single list of the axis indices. This
list will be modified to offset the axis>=0 entries appropriately. Ideally,
you only pass in axes<0, and then this does not apply. Doing this in the
above example:

#+BEGIN_EXAMPLE
>>> l
[0, -1, -5]

>>> x.shape
(2, 3, 4)

>>> [x.shape[i] for i in (l[0],l[1])]
[2, 4]

>>> x=nps.atleast_dims(x, l)
>>> x.shape
(1, 1, 2, 3, 4)

>>> l
[2, -1, -5]

>>> [x.shape[i] for i in (l[0],l[1])]
[2, 4]
#+END_EXAMPLE

We passed the axis indices in a list, and this list was modified to reflect
the new indices: The original axis=0 becomes known as axis=2. Again, if you
pass in only axis<0, then you don't need to care about this.

** mv()
Moves a given axis to a new position. Similar to numpy.moveaxis().

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.mv( a, -1, 0).shape
(4, 2, 3)

>>> nps.mv( a, -1, -5).shape
(4, 1, 1, 2, 3)

>>> nps.mv( a, 0, -5).shape
(2, 1, 1, 3, 4)
#+END_EXAMPLE

New length-1 dimensions are added at the front, as required, and any axis>=0
that are passed in refer to the array BEFORE these new dimensions are added.

** xchg()
Exchanges the positions of the two given axes. Similar to numpy.swapaxes()

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.xchg( a, -1, 0).shape
(4, 3, 2)

>>> nps.xchg( a, -1, -5).shape
(4, 1, 2, 3, 1)

>>> nps.xchg( a, 0, -5).shape
(2, 1, 1, 3, 4)
#+END_EXAMPLE

New length-1 dimensions are added at the front, as required, and any axis>=0
that are passed in refer to the array BEFORE these new dimensions are added.

** transpose()
Reverses the order of the last two dimensions.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.transpose(a).shape
(2, 4, 3)

>>> nps.transpose( np.arange(3) ).shape
(3, 1)
#+END_EXAMPLE

A "matrix" is generally seen as a 2D array that we can transpose by looking
at the 2 dimensions in the opposite order. Here we treat an n-dimensional
array as an n-2 dimensional object containing 2D matrices. As usual, the
last two dimensions contain the matrix.

New length-1 dimensions are added at the front, as required, meaning that a
1D input of shape (n,) is interpreted as a 2D input of shape (1,n), and the
transpose is an array of shape (n,1).

** dummy()
Adds a single length-1 dimension at the given position.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.dummy(a, 0).shape
(1, 2, 3, 4)

>>> nps.dummy(a, 1).shape
(2, 1, 3, 4)

>>> nps.dummy(a, -1).shape
(2, 3, 4, 1)

>>> nps.dummy(a, -2).shape
(2, 3, 1, 4)

>>> nps.dummy(a, -5).shape
(1, 1, 2, 3, 4)
#+END_EXAMPLE

This is similar to numpy.expand_dims(), but handles out-of-bounds dimensions
better. New length-1 dimensions are added at the front, as required, and any
axis>=0 that are passed in refer to the array BEFORE these new dimensions
are added.

** reorder()
Reorders the dimensions of an array.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.reorder( a, 0, -1, 1 ).shape
(2, 4, 3)

>>> nps.reorder( a, -2 , -1, 0 ).shape
(3, 4, 2)

>>> nps.reorder( a, -4 , -2, -5, -1, 0 ).shape
(1, 3, 1, 4, 2)
#+END_EXAMPLE

This is very similar to numpy.transpose(), but handles out-of-bounds
dimensions much better.

New length-1 dimensions are added at the front, as required, and any axis>=0
that are passed in refer to the array BEFORE these new dimensions are added.

** dot()
Non-conjugating dot product of two 1-dimensional n-long vectors.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(3)
>>> b = a+5
>>> a
array([0, 1, 2])

>>> b
array([5, 6, 7])

>>> nps.dot(a,b)
20
#+END_EXAMPLE

This is identical to numpysane.inner(). For a conjugating version of this
function, use nps.vdot(). Note that numpy's dot() has some special handling
when it is given more than 1-dimensional input. THIS function has no special
handling: normal broadcasting rules are applied.

** vdot()
Conjugating dot product of two 1-dimensional n-long vectors.

vdot(a,b) is equivalent to dot(np.conj(a), b)

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.array(( 1 + 2j, 3 + 4j, 5 + 6j))
>>> b = a+5
>>> a
array([ 1.+2.j,  3.+4.j,  5.+6.j])

>>> b
array([  6.+2.j,   8.+4.j,  10.+6.j])

>>> nps.vdot(a,b)
array((136-60j))

>>> nps.dot(a,b)
array((24+148j))
#+END_EXAMPLE

For a non-conjugating version of this function, use nps.dot(). Note that
numpy's vdot() has some special handling when it is given more than
1-dimensional input. THIS function has no special handling: normal
broadcasting rules are applied.

** outer()
Outer product of two 1-dimensional n-long vectors.

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(3)
>>> b = a+5
>>> a
array([0, 1, 2])

>>> b
array([5, 6, 7])

>>> nps.outer(a,b)
array([[ 0,  0,  0],
       [ 5,  6,  7],
       [10, 12, 14]])
#+END_EXAMPLE

This function is broadcast-aware through numpysane.broadcast_define().
The expected inputs have input prototype:

#+BEGIN_SRC python
(('n',), ('n',))
#+END_SRC

and output prototype

#+BEGIN_SRC python
('n', 'n')
#+END_SRC

The first 2 positional arguments will broadcast. The trailing shape of
those arguments must match the input prototype; the leading shape must follow
the standard broadcasting rules. Positional arguments past the first 2 and
all the keyword arguments are passed through untouched.

** norm2()
Broadcast-aware 2-norm. norm2(x) is identical to inner(x,x)

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(3)
>>> a
array([0, 1, 2])

>>> nps.norm2(a)
5
#+END_EXAMPLE

This is a convenience function to compute a 2-norm. Note that no square root
is taken: norm2(a) returns inner(a,a), the sum of squares.

** trace()
Broadcast-aware trace

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(3*4*4).reshape(3,4,4)
>>> a
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]],

       [[16, 17, 18, 19],
        [20, 21, 22, 23],
        [24, 25, 26, 27],
        [28, 29, 30, 31]],

       [[32, 33, 34, 35],
        [36, 37, 38, 39],
        [40, 41, 42, 43],
        [44, 45, 46, 47]]])

>>> nps.trace(a)
array([ 30,  94, 158])
#+END_EXAMPLE

This function is broadcast-aware through numpysane.broadcast_define().
The expected inputs have input prototype:

#+BEGIN_SRC python
(('n', 'n'),)
#+END_SRC

and output prototype

#+BEGIN_SRC python
()
#+END_SRC

The first positional argument will broadcast. The trailing shape of
that argument must match the input prototype; the leading shape must follow
the standard broadcasting rules. Positional arguments past the first and
all the keyword arguments are passed through untouched.

** matmult2()
Multiplication of two matrices

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6) .reshape(2,3)
>>> b = np.arange(12).reshape(3,4)

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

>>> nps.matmult2(a,b)
array([[20, 23, 26, 29],
       [56, 68, 80, 92]])
#+END_EXAMPLE

This multiplies exactly 2 matrices, and the output object can be given in
the 'out' argument. In the usual case where you let numpysane create and
return the result, you can use numpysane.matmult() instead. An advantage of
that function is that it can multiply an arbitrary number of matrices
together, not just 2.

** matmult()
Multiplication of N matrices

Synopsis:

#+BEGIN_EXAMPLE
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6) .reshape(2,3)
>>> b = np.arange(12).reshape(3,4)
>>> c = np.arange(4) .reshape(4,1)

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

>>> c
array([[0],
       [1],
       [2],
       [3]])

>>> nps.matmult(a,b,c)
array([[162],
       [504]])
#+END_EXAMPLE

This multiplies N matrices together by repeatedly calling matmult2() for
each adjacent pair. Unlike matmult2(), the arguments MUST all be matrices
to multiply; an 'out' kwarg for the output is not supported here.
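
A sketch of the pairwise reduction described above:

#+BEGIN_SRC python
import numpy as np
import numpysane as nps

a = np.arange(6) .reshape(2,3)
b = np.arange(12).reshape(3,4)
c = np.arange(4) .reshape(4,1)

# matmult(a,b,c) reduces pairwise, so it matches nested matmult2() calls
print( np.array_equal( nps.matmult(a, b, c),
                       nps.matmult2(nps.matmult2(a, b), c) ) )   # -> True
#+END_SRC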

* COMPATIBILITY

Python 2 and Python 3 should both be supported. Please report a bug if either
one doesn't work.

* REPOSITORY

https://github.com/dkogan/numpysane

* AUTHOR

Dima Kogan <dima@secretsauce.net>

* LICENSE AND COPYRIGHT

Copyright 2016-2017 Dima Kogan.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License (version 3 or higher) as
published by the Free Software Foundation.

See https://www.gnu.org/licenses/lgpl.html