File: HISTORY.txt

.. _HISTORY:

=================
Old Release Notes
=================

Theano 1.0.2 (23rd of May, 2018)
====================================

This is a maintenance release of Theano, version ``1.0.2``, with no
new features, but some important bug fixes.

We recommend that everybody update to this version.

Highlights (since 1.0.1):

 - Theano should work under PyPy now (this is experimental).
 - Update for cuDNN 7.1 RNN API changes.
 - Fix for a crash related to mixed dtypes with cuDNN convolutions.
 - MAGMA should work in more cases without manual config.
 - Handle reductions with non-default accumulator dtype better on the GPU.
 - Improvements to the test suite so that it fails less often due to
   random chance.

A total of 11 people contributed to this release since ``1.0.1``:

 - Frederic Bastien
 - Steven Bocco
 - Jon Haygood
 - Arnaud Bergeron
 - Jordan Melendez
 - Desiree Vogt-Lee
 - Garming Sam
 - Pascal Lamblin
 - Vincent Dumoulin
 - Glexin
 - Simon Lefrancois


Theano 1.0.1 (6th of December, 2017)
====================================

This is a maintenance release of Theano, version ``1.0.1``, with no
new features, but some important bug fixes.

We recommend that everybody update to this version.

Highlights (since 1.0.0):

 - Fixed compilation and improved float16 support for topK on GPU

   - **NB**: topK support on GPU is experimental and may not work for
             large input sizes on certain GPUs

 - Fixed cuDNN reductions when axes to reduce have size ``1``
 - Attempted to prevent re-initialization of the GPU in a child process
 - Fixed support for temporary paths with spaces in Theano initialization
 - Spell check pass on the documentation

A total of 6 people contributed to this release since ``1.0.0``:

 - Frederic Bastien
 - Steven Bocco
 - Arnaud Bergeron
 - Sam Johnson
 - Edward Betts
 - Simon Lefrancois


Theano 1.0.0 (15th of November, 2017)
=====================================

This is a final release of Theano, version ``1.0.0``, with a lot of
new features, interface changes, improvements and bug fixes.

We recommend that everybody update to this version.

Highlights (since 0.9.0):
 - Announcing that `MILA will stop developing Theano <https://groups.google.com/d/msg/theano-users/7Poq8BZutbY/rNCIfvAEAwAJ>`_
 - conda packages now available and updated in our own conda channel ``mila-udem``.
   To install it: ``conda install -c mila-udem theano pygpu``
 - Support NumPy ``1.13``
 - Support pygpu ``0.7``
 - Moved Python ``3.*`` minimum supported version from ``3.3`` to ``3.4``
 - Added conda recipe
 - Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
 - Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbid ``md5`` for security reasons
 - Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
 - Make sure MKL uses GNU OpenMP

   - **NB**: Matrix dot product (``gemm``) with ``mkl`` from conda
     could return wrong results in some cases. We have reported the problem
     upstream, and we have a workaround that raises an error with information
     about how to fix it.

 - Improved elemwise operations

   - Speed-up elemwise ops based on SciPy
   - Fixed memory leaks related to elemwise ops on GPU

 - Scan improvements

   - Speed up Theano scan compilation and gradient computation
   - Added meaningful message when missing inputs to scan

 - Speed up graph toposort algorithm
 - Faster C compilation through extensive use of a new interface for op params
 - Faster optimization step, with new optional destroy handler
 - Documentation updated and more complete

   - Added documentation for RNNBlock
   - Updated ``conv`` documentation

 - Support more debuggers for ``PdbBreakpoint``
 - Many bug fixes, crash fixes and warning improvements

A total of 71 people contributed to this release since 0.9.0; see the list below.

Interface changes:
 - Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
   and ``AllocDiag`` (set a vector as a diagonal of an empty array)
 - Removed op ``ExtractDiag`` from ``theano.tensor.nlinalg``, now only in ``theano.tensor.basic``
 - Generalized ``AllocDiag`` for any non-scalar input
 - Added new parameter ``target`` for MRG functions
 - Renamed ``MultinomialWOReplacementFromUniform`` to ``ChoiceFromUniform``
 - Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute the gradient (see the sketch after this list)

 - Removed or deprecated Theano flags:

   - ``cublas.lib``
   - ``cuda.enabled``
   - ``enable_initial_driver_test``
   - ``gpuarray.sync``
   - ``home``
   - ``lib.cnmem``
   - ``nvcc.*`` flags
   - ``pycuda.init``
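
A minimal sketch of the new ``L_op()`` signature, using a hypothetical toy op
written here for illustration (it is not from the Theano code base; only the
public ``theano.Op``/``theano.Apply`` API is assumed)::

    import numpy as np
    import theano
    import theano.tensor as tt

    class Square(theano.Op):
        """Toy op computing y = x**2, with its gradient written as L_op()."""

        __props__ = ()

        def make_node(self, x):
            x = tt.as_tensor_variable(x)
            return theano.Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            (x,) = inputs
            output_storage[0][0] = np.asarray(x * x)

        def L_op(self, inputs, outputs, output_grads):
            # Unlike the old grad(inputs, output_grads) signature, L_op()
            # also receives the symbolic outputs, so a gradient that reuses
            # y = f(x) can avoid recomputing it (this toy one does not).
            (x,) = inputs
            (gz,) = output_grads
            return [2.0 * x * gz]

    x = tt.dscalar('x')
    g = theano.grad(Square()(x), x)
    print(theano.function([x], g)(3.0))  # -> 6.0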

Convolution updates:
 - Implemented separable convolutions for 2D and 3D
 - Implemented grouped convolutions for 2D and 3D
 - Added dilated causal convolutions for 2D
 - Added unshared convolutions
 - Implemented fractional bilinear upsampling
 - Removed old ``conv3d`` interface
 - Deprecated old ``conv2d`` interface

GPU:
 - Added a meta-optimizer to select the fastest GPU implementations for convolutions
 - Prevent GPU initialization when not required
 - Added disk caching option for kernels
 - Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions (see the sketch after this section)
 - Added useful stats for GPU in profile mode
 - Added Cholesky op based on ``cusolver`` backend
 - Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
   SVD, matrix inverse, QR, cholesky and eigh
 - Added ``GpuCublasTriangularSolve``
 - Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
 - Support log gamma function for all non-complex types
 - Support GPU SoftMax in both OpenCL and CUDA
 - Support offset parameter ``k`` for ``GpuEye``
 - ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU

 - cuDNN:

   - Official support for ``v6.*`` and ``v7.*``
   - Added spatial transformation operation based on cuDNN
   - Updated and improved caching system for runtime-chosen cuDNN convolution algorithms
   - Support cuDNN v7 tensor core operations for convolutions with runtime timed algorithms
   - Better support and loading on Windows and Mac
   - Support cuDNN v6 dilated convolutions
   - Support cuDNN v6 reductions for contiguous inputs
   - Optimized ``SUM(x^2)``, ``SUM(ABS(X))`` and ``MAX(ABS(X))`` operations with cuDNN reductions
   - Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
     to help configure Theano when CUDA and cuDNN cannot be found automatically
   - Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN import
   - Disallowed ``float16`` precision for convolution gradients
   - Fixed memory alignment detection
   - Added profiling in C debug mode (with theano flag ``cmodule.debug=True``)
   - Added Python scripts to help test cuDNN convolutions
   - Automatic addition of cuDNN DLL path to ``PATH`` environment variable on Windows

 - Updated ``float16`` support

   - Added documentation for GPU float16 ops
   - Support ``float16`` for ``GpuGemmBatch``
   - Started to use ``float32`` precision for computations that don't support ``float16`` on GPU
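
A hedged sketch of the new ``sync_shared()`` helper (the setup around it is
illustrative; on a CPU-only configuration the call is essentially a no-op, so
it mainly matters when timing GPU code)::

    import numpy as np
    import theano

    v = theano.shared(np.zeros(4, dtype=theano.config.floatX))
    step = theano.function([], [], updates=[(v, v + 1)])

    step()              # on the GPU, this may return before the kernel finishes
    step.sync_shared()  # block until v's updated value is actually available
    print(v.get_value())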

New features:
 - Implemented truncated normal distribution with the Box-Muller transform
 - Added ``L_op()`` overriding option for ``OpFromGraph``
 - Added NumPy C-API based fallback implementation for ``[sd]gemv_`` and ``[sd]dot_``
 - Implemented ``topk`` and ``argtopk`` on CPU and GPU
 - Implemented ``max()`` and ``min()`` functions for boolean and unsigned integer types
 - Added ``tensor6()`` and ``tensor7()`` in ``theano.tensor`` module
 - Added boolean indexing for sub-tensors (see the sketch after this list)
 - Added covariance matrix function ``theano.tensor.cov``
 - Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
 - Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``
 - Added Scaled Exponential Linear Unit (SELU) activation
 - Added sigmoid_binary_crossentropy function
 - Added tri-gamma function
 - Added ``unravel_index`` and ``ravel_multi_index`` functions on CPU
 - Added modes ``half`` and ``full`` for ``Images2Neibs`` ops
 - Implemented gradient for ``AbstractBatchNormTrainGrad``
 - Implemented gradient for matrix pseudoinverse op
 - Added new prop ``replace`` for ``ChoiceFromUniform`` op
 - Added new prop ``on_error`` for CPU ``Cholesky`` op
 - Added new Theano flag ``deterministic`` to help control how Theano optimizes certain ops that have deterministic versions.
   Currently used for subtensor Ops only.
 - Added new Theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent in inplace optimizations
 - Added new Theano flag ``check_stack_trace`` to help check the stack trace during the optimization process
 - Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
 - Added new Theano flag ``pickle_test_value`` to help disable pickling test values
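
A short sketch combining a few of the items above (the flag value is
illustrative and must be set before ``theano`` is imported; the boolean
indexing and ``tensor6()`` usage follow the descriptions above)::

    import os
    os.environ.setdefault('THEANO_FLAGS', 'deterministic=more')  # illustrative
    import numpy as np
    import theano
    import theano.tensor as tt

    x = tt.vector('x')
    positive = x[x > 0]  # NumPy-style boolean indexing for sub-tensors
    f = theano.function([x], positive)
    print(f(np.asarray([-1.0, 2.0, -3.0, 4.0], dtype=theano.config.floatX)))
    # -> [ 2.  4.]

    t6 = tt.tensor6('t6')  # new 6-dimensional symbolic tensor constructor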

Others:
 - Kept stack trace for optimizations in new GPU backend
 - Added deprecation warning for the softmax and logsoftmax vector case
 - Added a warning to announce that a C++ compiler will become mandatory in the next Theano release (``0.11``)
 - Added ``R_op()`` for ``ZeroGrad``
 - Added description for ``RNNBlock``

Other more detailed changes:
 - Fixed invalid casts and index overflows in ``theano.tensor.signal.pool``
 - Fixed gradient error for elemwise ``minimum`` and ``maximum`` when compared values are the same
 - Fixed gradient for ``ARange``
 - Removed ``ViewOp`` subclass during optimization
 - Removed useless warning when profile is manually disabled
 - Added tests for abstract conv
 - Added options for ``disconnected_outputs`` to Rop
 - Removed ``theano/compat/six.py``
 - Removed ``COp.get_op_params()``
 - Support for lists of strings in ``Op.c_support_code()``, to help avoid duplicating support code
 - Macro names provided for array properties are now standardized in both CPU and GPU C codes
 - Moved all C code files into separate folder ``c_code`` in every Theano module
 - Many improvements for Travis CI tests (with better splitting for faster testing)
 - Many improvements for Jenkins CI tests: daily testings on Mac and Windows in addition to Linux

Committers since 0.9.0:
 - Frederic Bastien
 - Steven Bocco
 - João Victor Tozatti Risso
 - Arnaud Bergeron
 - Mohammed Affan
 - amrithasuresh
 - Pascal Lamblin
 - Reyhane Askari
 - Alexander Matyasko
 - Shawn Tan
 - Simon Lefrancois
 - Adam Becker
 - Vikram
 - Gijs van Tulder
 - Faruk Ahmed
 - Thomas George
 - erakra
 - Andrei Costinescu
 - Boris Fomitchev
 - Zhouhan LIN
 - Aleksandar Botev
 - jhelie
 - xiaoqie
 - Tegan Maharaj
 - Matt Graham
 - Cesar Laurent
 - Gabe Schwartz
 - Juan Camilo Gamboa Higuera
 - Tim Cooijmans
 - Anirudh Goyal
 - Saizheng Zhang
 - Yikang Shen
 - vipulraheja
 - Florian Bordes
 - Sina Honari
 - Chiheb Trabelsi
 - Shubh Vachher
 - Daren Eiri
 - Joseph Paul Cohen
 - Laurent Dinh
 - Mohamed Ishmael Diwan Belghazi
 - Jeff Donahue
 - Ramana Subramanyam
 - Bogdan Budescu
 - Dzmitry Bahdanau
 - Ghislain Antony Vaillant
 - Jan Schlüter
 - Nan Jiang
 - Xavier Bouthillier
 - fo40225
 - mrTsjolder
 - wyjw
 - Aarni Koskela
 - Adam Geitgey
 - Adrian Keet
 - Adrian Seyboldt
 - Anmol Sahoo
 - Chong Wu
 - Holger Kohr
 - Jayanth Koushik
 - Lilian Besson
 - Lv Tao
 - Michael Manukyan
 - Murugesh Marvel
 - NALEPA
 - Rebecca N. Palmer
 - Zotov Yuriy
 - dareneiri
 - lrast
 - morrme
 - naitonium


Theano 1.0.0rc1 (30th of October, 2017)
=======================================

This release contains new features, improvements and bug fixes to prepare the upcoming release.

We recommend that every developer update to this version.

Highlights:
 - Make sure MKL uses GNU OpenMP

   - **NB**: Matrix dot product (``gemm``) with ``mkl`` from conda
     could return wrong results in some cases. We have reported the problem
     upstream, and we have a workaround that raises an error with information
     about how to fix it.

 - Optimized ``SUM(x^2)``, ``SUM(ABS(X))`` and ``MAX(ABS(X))`` operations with cuDNN reductions
 - Added Python scripts to help test cuDNN convolutions
 - Fixed invalid casts and index overflows in ``theano.tensor.signal.pool``

A total of 71 people contributed to this release since 0.9.0; see the list below.

Committers since 0.9.0:
 - Frederic Bastien
 - Steven Bocco
 - João Victor Tozatti Risso
 - Arnaud Bergeron
 - Mohammed Affan
 - amrithasuresh
 - Pascal Lamblin
 - Reyhane Askari
 - Alexander Matyasko
 - Shawn Tan
 - Simon Lefrancois
 - Adam Becker
 - Vikram
 - Gijs van Tulder
 - Faruk Ahmed
 - Thomas George
 - erakra
 - Andrei Costinescu
 - Boris Fomitchev
 - Zhouhan LIN
 - Aleksandar Botev
 - jhelie
 - xiaoqie
 - Tegan Maharaj
 - Matt Graham
 - Cesar Laurent
 - Gabe Schwartz
 - Juan Camilo Gamboa Higuera
 - Tim Cooijmans
 - Anirudh Goyal
 - Saizheng Zhang
 - Yikang Shen
 - vipulraheja
 - Florian Bordes
 - Sina Honari
 - Chiheb Trabelsi
 - Shubh Vachher
 - Daren Eiri
 - Joseph Paul Cohen
 - Laurent Dinh
 - Mohamed Ishmael Diwan Belghazi
 - Jeff Donahue
 - Ramana Subramanyam
 - Bogdan Budescu
 - Dzmitry Bahdanau
 - Ghislain Antony Vaillant
 - Jan Schlüter
 - Nan Jiang
 - Xavier Bouthillier
 - fo40225
 - mrTsjolder
 - wyjw
 - Aarni Koskela
 - Adam Geitgey
 - Adrian Keet
 - Adrian Seyboldt
 - Anmol Sahoo
 - Chong Wu
 - Holger Kohr
 - Jayanth Koushik
 - Lilian Besson
 - Lv Tao
 - Michael Manukyan
 - Murugesh Marvel
 - NALEPA
 - Rebecca N. Palmer
 - Zotov Yuriy
 - dareneiri
 - lrast
 - morrme
 - naitonium


Theano 0.10.0beta4 (16th of October, 2017)
==========================================

This release contains new features, improvements and bug fixes to prepare the upcoming release candidate.

We recommend that every developer update to this version.

Highlights:
 - Announcing that `MILA will stop developing Theano <https://groups.google.com/d/msg/theano-users/7Poq8BZutbY/rNCIfvAEAwAJ>`_
 - Bug fixes, crash fixes, warning improvements and documentation updates

A total of 70 people contributed to this release since 0.9.0; see the list below.

Interface changes:
 - Generalized ``AllocDiag`` for any non-scalar input

Convolution updates:
 - Implemented fractional bilinear upsampling

cuDNN (GPU):
 - Disallowed ``float16`` precision for convolution gradients
 - Fixed memory alignment detection
 - Added profiling in C debug mode (with theano flag ``cmodule.debug=True``)

New features:
 - Implemented truncated normal distribution with the Box-Muller transform
 - Added ``L_op()`` overriding option for ``OpFromGraph``
 - Added NumPy C-API based fallback implementation for ``[sd]gemv_`` and ``[sd]dot_``

Other more detailed changes:
 - Improved stack trace follow-up for GPU optimizations
 - Fixed gradient error for elemwise ``minimum`` and ``maximum`` when compared values are the same
 - Fixed gradient for ``ARange``
 - Removed ``ViewOp`` subclass during optimization

Committers since 0.9.0:
 - Frederic Bastien
 - João Victor Tozatti Risso
 - Arnaud Bergeron
 - Steven Bocco
 - Mohammed Affan
 - amrithasuresh
 - Pascal Lamblin
 - Reyhane Askari
 - Alexander Matyasko
 - Shawn Tan
 - Simon Lefrancois
 - Adam Becker
 - Vikram
 - Gijs van Tulder
 - Faruk Ahmed
 - Thomas George
 - erakra
 - Andrei Costinescu
 - Boris Fomitchev
 - Zhouhan LIN
 - Aleksandar Botev
 - jhelie
 - xiaoqie
 - Tegan Maharaj
 - Matt Graham
 - Cesar Laurent
 - Gabe Schwartz
 - Juan Camilo Gamboa Higuera
 - Tim Cooijmans
 - Anirudh Goyal
 - Saizheng Zhang
 - Yikang Shen
 - vipulraheja
 - Florian Bordes
 - Sina Honari
 - Chiheb Trabelsi
 - Shubh Vachher
 - Daren Eiri
 - Joseph Paul Cohen
 - Laurent Dinh
 - Mohamed Ishmael Diwan Belghazi
 - Jeff Donahue
 - Ramana Subramanyam
 - Bogdan Budescu
 - Dzmitry Bahdanau
 - Ghislain Antony Vaillant
 - Jan Schlüter
 - Nan Jiang
 - Xavier Bouthillier
 - fo40225
 - mrTsjolder
 - wyjw
 - Aarni Koskela
 - Adam Geitgey
 - Adrian Keet
 - Adrian Seyboldt
 - Anmol Sahoo
 - Chong Wu
 - Holger Kohr
 - Jayanth Koushik
 - Lilian Besson
 - Lv Tao
 - Michael Manukyan
 - Murugesh Marvel
 - NALEPA
 - Zotov Yuriy
 - dareneiri
 - lrast
 - morrme
 - naitonium


Theano 0.10.0beta3 (20th of September, 2017)
============================================

This release contains new features, improvements and bug fixes to prepare the upcoming release candidate.

We recommend that every developer update to this version.

Highlights:
 - conda packages now available and updated in our own conda channel ``mila-udem``.
   To install it: ``conda install -c mila-udem -c mila-udem/label/pre theano pygpu``

 - Improved elemwise operations

   - Speed-up elemwise ops based on SciPy
   - Fixed memory leak related to elemwise ops on GPU

 - Fixed pygpu detection
 - Bug fixes, crash fixes, warning improvements and documentation updates

A total of 69 people contributed to this release since 0.9.0; see the list below.

Interface changes:
 - Removed op ``ExtractDiag`` from ``theano.tensor.nlinalg``, now only in ``theano.tensor.basic``

Convolution updates:
 - Added dilated causal convolutions for 2D

New features:
 - Implemented ``topk`` and ``argtopk`` on CPU and GPU
 - Added ``unravel_index`` and ``ravel_multi_index`` functions on CPU
 - Implemented ``max()`` and ``min()`` functions for boolean and unsigned integer types

Others:
 - Added ``R_op()`` for ``ZeroGrad``
 - Added description for ``RNNBlock``

Committers since 0.9.0:
 - Frederic Bastien
 - João Victor Tozatti Risso
 - Arnaud Bergeron
 - Steven Bocco
 - Mohammed Affan
 - amrithasuresh
 - Pascal Lamblin
 - Reyhane Askari
 - Alexander Matyasko
 - Simon Lefrancois
 - Adam Becker
 - Shawn Tan
 - Vikram
 - Gijs van Tulder
 - Thomas George
 - Andrei Costinescu
 - Faruk Ahmed
 - Boris Fomitchev
 - Zhouhan LIN
 - Aleksandar Botev
 - jhelie
 - xiaoqie
 - Tegan Maharaj
 - Matt Graham
 - Cesar Laurent
 - Gabe Schwartz
 - Juan Camilo Gamboa Higuera
 - Tim Cooijmans
 - Anirudh Goyal
 - Saizheng Zhang
 - Yikang Shen
 - vipulraheja
 - Florian Bordes
 - Sina Honari
 - erakra
 - Chiheb Trabelsi
 - Shubh Vachher
 - Daren Eiri
 - Joseph Paul Cohen
 - Laurent Dinh
 - Mohamed Ishmael Diwan Belghazi
 - Jeff Donahue
 - Ramana Subramanyam
 - Bogdan Budescu
 - Dzmitry Bahdanau
 - Ghislain Antony Vaillant
 - Jan Schlüter
 - Nan Jiang
 - Xavier Bouthillier
 - fo40225
 - wyjw
 - Aarni Koskela
 - Adam Geitgey
 - Adrian Keet
 - Adrian Seyboldt
 - Anmol Sahoo
 - Chong Wu
 - Holger Kohr
 - Jayanth Koushik
 - Lilian Besson
 - Lv Tao
 - Michael Manukyan
 - Murugesh Marvel
 - NALEPA
 - Zotov Yuriy
 - dareneiri
 - lrast
 - morrme
 - naitonium


Theano 0.10.0beta2 (7th of September, 2017)
===========================================

This release contains new features, improvements and bug fixes to prepare the upcoming release candidate.

We recommend that every developer update to this version.

Highlights:
 - Support NumPy ``1.13``
 - Support pygpu ``0.7``
 - Added conda recipe
 - Optional faster optimization step with new destroy handler
 - Added documentation for RNNBlock
 - Bug fixes, crash fixes, warning improvements and documentation updates

A total of 67 people contributed to this release since 0.9.0; see the list below.

Interface changes:
 - Added new parameter ``target`` for MRG functions

Convolution updates:
 - Added unshared convolutions
 - Added 3D separable convolutions
 - Added 3D grouped convolutions
 - Removed old ``conv3d`` interface
 - Deprecated old ``conv2d`` interface
 - Updated ``conv`` documentation

GPU:
 - Added a meta-optimizer to select the fastest GPU implementations for convolutions

 - cuDNN:

   - Official support for ``v6.*`` and ``v7.*``; support for ``v5.*`` will be removed in the next release
   - Added spatial transformation operation based on cuDNN
   - Updated and improved caching system for runtime-chosen cuDNN convolution algorithms
   - Support cuDNN v7 tensor core operations for convolutions with runtime timed algorithms
   - Restricted cuDNN reductions to contiguous inputs
   - Automatic addition of cuDNN DLL path to ``PATH`` environment variable on Windows

New features:
 - Added ``tensor6()`` and ``tensor7()`` in ``theano.tensor`` module
 - Added boolean indexing for sub-tensors
 - Added covariance matrix function ``theano.tensor.cov``
 - Added new Theano flag ``pickle_test_value`` to help disable pickling test values

Others:
 - Kept stack trace for optimizations in new GPU backend

Other more detailed changes:
 - Moved all C code files into separate folder ``c_code`` in every Theano module
 - Improvements for Jenkins tests

Committers since 0.9.0:
 - Frederic Bastien
 - João Victor Tozatti Risso
 - Arnaud Bergeron
 - Steven Bocco
 - Mohammed Affan
 - amrithasuresh
 - Pascal Lamblin
 - Reyhane Askari
 - Alexander Matyasko
 - Simon Lefrancois
 - Shawn Tan
 - Gijs van Tulder
 - Thomas George
 - Vikram
 - Andrei Costinescu
 - Faruk Ahmed
 - Boris Fomitchev
 - Zhouhan LIN
 - Aleksandar Botev
 - jhelie
 - xiaoqie
 - Tegan Maharaj
 - Matt Graham
 - Cesar Laurent
 - Gabe Schwartz
 - Juan Camilo Gamboa Higuera
 - Tim Cooijmans
 - Anirudh Goyal
 - Saizheng Zhang
 - vipulraheja
 - Florian Bordes
 - Sina Honari
 - Yikang Shen
 - erakra
 - Chiheb Trabelsi
 - Shubh Vachher
 - Daren Eiri
 - Joseph Paul Cohen
 - Laurent Dinh
 - Mohamed Ishmael Diwan Belghazi
 - Jeff Donahue
 - Ramana Subramanyam
 - Bogdan Budescu
 - Dzmitry Bahdanau
 - Ghislain Antony Vaillant
 - Jan Schlüter
 - Xavier Bouthillier
 - fo40225
 - Aarni Koskela
 - Adam Becker
 - Adam Geitgey
 - Adrian Keet
 - Adrian Seyboldt
 - Anmol Sahoo
 - Chong Wu
 - Holger Kohr
 - Jayanth Koushik
 - Lilian Besson
 - Lv Tao
 - Michael Manukyan
 - Murugesh Marvel
 - NALEPA
 - Zotov Yuriy
 - dareneiri
 - lrast
 - morrme
 - wyjw


Theano 0.10.0beta1 (9th of August, 2017)
========================================

This release contains a lot of bug fixes, improvements and new features to prepare the upcoming release candidate.

We recommend that every developer update to this version.

Highlights:
 - Moved Python 3.* minimum supported version from 3.3 to 3.4
 - Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
 - Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbid ``md5`` for security reasons
 - Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
 - Support more debuggers for ``PdbBreakpoint``

 - Scan improvements

   - Speed up Theano scan compilation and gradient computation
   - Added meaningful message when missing inputs to scan

 - Speed up graph toposort algorithm
 - Faster C compilation through extensive use of a new interface for op params
 - Faster optimization step
 - Documentation updated and more complete
 - Many bug fixes, crash fixes and warning improvements

A total of 65 people contributed to this release since 0.9.0; see the list below.

Interface changes:
 - Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
   and ``AllocDiag`` (set a vector as a diagonal of an empty array)
 - Renamed ``MultinomialWOReplacementFromUniform`` to ``ChoiceFromUniform``

 - Removed or deprecated Theano flags:

   - ``cublas.lib``
   - ``cuda.enabled``
   - ``enable_initial_driver_test``
   - ``gpuarray.sync``
   - ``home``
   - ``lib.cnmem``
   - ``nvcc.*`` flags
   - ``pycuda.init``

 - Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute the gradient

Convolution updates:
 - Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN import
 - Implemented separable convolutions
 - Implemented grouped convolutions

GPU:
 - Prevent GPU initialization when not required
 - Added disk caching option for kernels
 - Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions
 - Added useful stats for GPU in profile mode
 - Added Cholesky op based on ``cusolver`` backend
 - Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
   SVD, matrix inverse, QR, cholesky and eigh
 - Added ``GpuCublasTriangularSolve``
 - Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
 - Support log gamma function for all non-complex types
 - Support GPU SoftMax in both OpenCL and CUDA
 - Support offset parameter ``k`` for ``GpuEye``
 - ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU

 - Better cuDNN support

   - Official support for ``v5.*`` and ``v6.*``
   - Better support and loading on Windows and Mac
   - Support cuDNN v6 dilated convolutions
   - Support cuDNN v6 reductions
   - Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
     to help configure Theano when CUDA and cuDNN cannot be found automatically.

 - Updated ``float16`` support

   - Added documentation for GPU float16 ops
   - Support ``float16`` for ``GpuGemmBatch``
   - Started to use ``float32`` precision for computations that don't support ``float16`` on GPU

New features:
 - Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
 - Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``.
 - Added Scaled Exponential Linear Unit (SELU) activation
 - Added sigmoid_binary_crossentropy function
 - Added tri-gamma function
 - Added modes ``half`` and ``full`` for ``Images2Neibs`` ops
 - Implemented gradient for ``AbstractBatchNormTrainGrad``
 - Implemented gradient for matrix pseudoinverse op
 - Added new prop ``replace`` for ``ChoiceFromUniform`` op
 - Added new prop ``on_error`` for CPU ``Cholesky`` op
 - Added new Theano flag ``deterministic`` to help control how Theano optimizes certain ops that have deterministic versions.
   Currently used for subtensor Ops only.
 - Added new Theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent in inplace optimizations
 - Added new Theano flag ``check_stack_trace`` to help check the stack trace during the optimization process
 - Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.

Others:
 - Added deprecation warning for the softmax and logsoftmax vector case
 - Added a warning to announce that a C++ compiler will become mandatory in the next Theano release (``0.11``)

Other more detailed changes:
 - Removed useless warning when profile is manually disabled
 - Added tests for abstract conv
 - Added options for ``disconnected_outputs`` to Rop
 - Removed ``theano/compat/six.py``
 - Removed ``COp.get_op_params()``
 - Support for lists of strings in ``Op.c_support_code()``, to help avoid duplicating support code
 - Macro names provided for array properties are now standardized in both CPU and GPU C codes
 - Started to move C code files into separate folder ``c_code`` in every Theano module
 - Many improvements for Travis CI tests (with better splitting for faster testing)
 - Many improvements for Jenkins CI tests: daily testings on Mac and Windows in addition to Linux

Committers since 0.9.0:
 - Frederic Bastien
 - Arnaud Bergeron
 - amrithasuresh
 - João Victor Tozatti Risso
 - Steven Bocco
 - Pascal Lamblin
 - Mohammed Affan
 - Reyhane Askari
 - Alexander Matyasko
 - Simon Lefrancois
 - Shawn Tan
 - Thomas George
 - Faruk Ahmed
 - Zhouhan LIN
 - Aleksandar Botev
 - jhelie
 - xiaoqie
 - Tegan Maharaj
 - Matt Graham
 - Cesar Laurent
 - Gabe Schwartz
 - Juan Camilo Gamboa Higuera
 - AndroidCloud
 - Saizheng Zhang
 - vipulraheja
 - Florian Bordes
 - Sina Honari
 - Vikram
 - erakra
 - Chiheb Trabelsi
 - Shubh Vachher
 - Daren Eiri
 - Gijs van Tulder
 - Laurent Dinh
 - Mohamed Ishmael Diwan Belghazi
 - mila
 - Jeff Donahue
 - Ramana Subramanyam
 - Bogdan Budescu
 - Ghislain Antony Vaillant
 - Jan Schlüter
 - Xavier Bouthillier
 - fo40225
 - Aarni Koskela
 - Adam Becker
 - Adam Geitgey
 - Adrian Keet
 - Adrian Seyboldt
 - Andrei Costinescu
 - Anmol Sahoo
 - Chong Wu
 - Holger Kohr
 - Jayanth Koushik
 - Jenkins
 - Lilian Besson
 - Lv Tao
 - Michael Manukyan
 - Murugesh Marvel
 - NALEPA
 - Ubuntu
 - Zotov Yuriy
 - dareneiri
 - lrast
 - morrme
 - yikang


Theano 0.9.0 (20th of March, 2017)
==================================

This is a final release of Theano, version ``0.9.0``, with a lot of
new features, interface changes, improvements and bug fixes.

We recommend that everybody update to this version.

Highlights (since 0.8.0):
 - Better Python 3.5 support
 - Better numpy 1.12 support
 - Conda packages for Mac, Linux and Windows
 - Support newer Mac and Windows versions
 - More Windows integration:

   - Theano scripts (``theano-cache`` and ``theano-nose``) now work on Windows
   - Better support for Windows line endings in C code
   - Support for spaces in paths on Windows

 - Scan improvements:

   - More scan optimizations, with faster compilation and gradient computation
   - Support for checkpointing in scan (a trade-off between speed and memory usage, useful for long sequences)
   - Fixed broadcast checking in scan

 - Graphs improvements:

   - More numerical stability by default for some graphs
   - Better handling of corner cases for theano functions and graph optimizations
   - More graph optimizations with faster compilation and execution
   - Smaller and more readable graphs

 - New GPU back-end:

   - Removed warp-synchronous programming to get good results with newer CUDA drivers
   - More pooling support on GPU when cuDNN isn't available
   - Full support of ignore_border option for pooling
   - Inplace storage for shared variables
   - float16 storage
   - Using the PCI bus ID of graphics cards for a better mapping between Theano device numbers and nvidia-smi numbers
   - Fixed offset error in ``GpuIncSubtensor``

 - Less C code compilation
 - Added support for bool dtype
 - Updated and more complete documentation
 - Bug fixes related to merge optimizer and shape inference
 - Lots of other bug fixes, crash fixes and warning improvements

A total of 123 people contributed to this release since 0.8.0; see the list below.

Interface changes:
 - Merged ``CumsumOp/CumprodOp`` into ``CumOp``
 - In MRG module:

   - Replaced method ``multinomial_wo_replacement()`` with new method ``choice()``
   - Random generator now tries to infer the broadcast pattern of its output

 - New pooling interface
 - Pooling parameters can change at run time
 - Moved ``softsign`` out of sandbox to ``theano.tensor.nnet.softsign`` (see the sketch after this list)
 - Using floatX dtype when converting empty list/tuple
 - ``Roll`` now makes the shift modulo the size of the axis we roll on
 - ``round()`` now defaults to the same behavior as NumPy: half_to_even
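
A minimal sketch of the relocated function (assuming the import path named in
the item above; ``softsign(x)`` computes ``x / (1 + |x|)``)::

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor.nnet import softsign  # no longer in the sandbox

    x = tt.vector('x')
    f = theano.function([x], softsign(x))
    print(f(np.asarray([-2.0, 0.0, 2.0], dtype=theano.config.floatX)))
    # -> approximately [-0.667  0.     0.667]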

Convolution updates:
 - Support for full and half modes for 2D and 3D convolutions, including in ``conv3d2d``
 - Allowed pooling of empty batch
 - Implement ``conv2d_transpose`` convenience function
 - Multi-core convolution and pooling on CPU
 - New abstract 3d convolution interface similar to the 2d convolution interface
 - Dilated convolution


GPU:
 - cuDNN: support version 5.1 and wrap batch normalization (2d and 3d) and RNN functions
 - Multi-GPU synchronous updates (via Platoon, using NCCL)
 - Gemv (matrix-vector product) speed-up for special shapes
 - cuBLAS gemv workaround when reducing on an axis with a dimension size of 0
 - Warn user that some cuDNN algorithms may produce unexpected results in certain environments
   for convolution backward filter operations
 - ``GPUMultinomialFromUniform`` op now supports multiple dtypes
 - Support for ``MaxAndArgMax`` for some axis combination
 - Support for solve (using cusolver), erfinv and erfcinv
 - Implemented ``GpuAdvancedSubtensor``

New features:
 - ``OpFromGraph`` now allows gradient overriding for every input
 - Added Abstract Ops for batch normalization that use cuDNN when available and pure Theano CPU/GPU alternatives otherwise
 - Added gradient of solve, tensorinv (CPU), tensorsolve (CPU), searchsorted (CPU), DownsampleFactorMaxGradGrad (CPU)
 - Added Multinomial Without Replacement
 - Allowed partial evaluation of compiled function
 - More Rop support
 - Indexing supports ellipsis: ``a[..., 3]``, ``a[1,...,3]`` (see the sketch after this list)
 - Added ``theano.tensor.{tensor5,dtensor5, ...}``
 - ``compiledir_format`` supports ``device``
 - Added new Theano flag ``conv.assert_shape`` to check user-provided shapes at runtime (for debugging)
 - Added new Theano flag ``cmodule.age_thresh_use``
 - Added new Theano flag ``cuda.enabled``
 - Added new Theano flag ``nvcc.cudafe`` to enable faster compilation and import with old CUDA back-end
 - Added new Theano flag ``print_global_stats`` to print some global statistics (time spent) at the end
 - Added new Theano flag ``profiling.ignore_first_call``, useful to profile the new gpu back-end
 - Removed ProfileMode (use Theano flag ``profile=True`` instead)
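
A small sketch of the ellipsis indexing and one of the new tensor constructors
(variable names are illustrative)::

    import numpy as np
    import theano
    import theano.tensor as tt

    x = tt.tensor3('x')
    a = x[..., 3]        # equivalent to x[:, :, 3]
    b = x[1, ..., 3]     # equivalent to x[1, :, 3]
    z = tt.tensor5('z')  # one of the new symbolic tensor constructors

    f = theano.function([x], [a, b])
    data = np.arange(24.0, dtype=theano.config.floatX).reshape(2, 3, 4)
    print(f(data))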


Others:
 - Split op now has C code for CPU and GPU
 - ``theano-cache list`` now includes compilation times
 - Speed up argmax-only computation on GPU (when the max itself is not needed)
 - More stack traces in error messages
 - Speed up cholesky grad
 - ``log(sum(exp(...)))`` now gets stability-optimized


Other more detailed changes:
 - Added Jenkins (GPU tests run on pull requests in addition to daily buildbot)
 - Removed old benchmark directory and other old files not used anymore
 - Use of 64-bit indexing in sparse ops to allow matrices with more than 2\ :sup:`31`\ -1 elements
 - Allowed more than one output to be a destructive inplace
 - More support for negative axes
 - Added the keepdims parameter to the norm function
 - Made scan gradients more deterministic

Committers since 0.8.0:
 - Frederic Bastien
 - Arnaud Bergeron
 - Pascal Lamblin
 - Steven Bocco
 - Ramana Subramanyam
 - Simon Lefrancois
 - Gijs van Tulder
 - Benjamin Scellier
 - khaotik
 - Chiheb Trabelsi
 - Chinnadhurai Sankar
 - Cesar Laurent
 - Reyhane Askari
 - Mohammad Pezeshki
 - Alexander Matyasko
 - Alexandre de Brebisson
 - Mathieu Germain
 - Nan Rosemary Ke
 - Pierre Luc Carrier
 - Olivier Mastropietro
 - Thomas George
 - Saizheng Zhang
 - Iulian Vlad Serban
 - Francesco Visin
 - Caglar
 - Faruk Ahmed
 - Harm de Vries
 - Samira Shabanian
 - Vincent Dumoulin
 - Nicolas Ballas
 - Jakub Sygnowski
 - Jan Schlüter
 - Samira Ebrahimi Kahou
 - Mikhail Korobov
 - Fei Wang
 - Kv Manohar
 - Jesse Livezey
 - Kelvin Xu
 - Matt Graham
 - Ruslana Makovetsky
 - Sina Honari
 - Bryn Keller
 - Ciyong Chen
 - Vitaliy Kurlin
 - Zhouhan LIN
 - Gokula Krishnan
 - Kumar Krishna Agrawal
 - Ozan Çağlayan
 - Vincent Michalski
 - affanv14
 - Amjad Almahairi
 - Ray Donnelly
 - Tim Cooijmans
 - happygds
 - mockingjamie
 - Christos Tsirigotis
 - Florian Bordes
 - Ilya Kulikov
 - RadhikaG
 - Taesup (TS) Kim
 - Ying Zhang
 - Anton Chechetka
 - Karthik Karanth
 - Kirill Bobyrev
 - Rebecca N. Palmer
 - Yang Zhang
 - Yaroslav Ganin
 - Jonas Degrave
 - Liwei Cai
 - Lucas Beyer
 - Michael Harradon
 - Morgan Stuart
 - Tim Gasper
 - Xavier Bouthillier
 - p
 - texot
 - Andrés Gottlieb
 - Ben Poole
 - Bhavishya Pohani
 - Carl Thomé
 - David Bau
 - Dimitar Dimitrov
 - Evelyn Mitchell
 - Fei Zhan
 - Fuchai
 - Fábio Perez
 - Gennadiy Tupitsin
 - Gilles Louppe
 - Greg Ciccarelli
 - He
 - Huan Zhang
 - Kaixhin
 - Kevin Keraudren
 - Maltimore
 - Marc-Alexandre Cote
 - Marco
 - Marius F. Killinger
 - Martin Drawitsch
 - Maxim Kochurov
 - Micah Bojrab
 - Neil
 - Nizar Assaf
 - Rithesh Kumar
 - Rizky Luthfianto
 - Robin Millette
 - Roman Ring
 - Sander Dieleman
 - Sebastin Santy
 - Shawn Tan
 - Wazeer Zulfikar
 - Wojciech Głogowski
 - Yann N. Dauphin
 - gw0 [http://gw.tnode.com/]
 - hexahedria
 - hsintone
 - jakirkham
 - joncrall
 - root
 - superantichrist
 - tillahoffmann
 - valtron
 - wazeerzulfikar
 - you-n-g


Theano 0.9.0rc4 (13th of March, 2017)
=====================================

This release extends 0.9.0rc3 and announces the upcoming final release 0.9.

Highlights (since 0.9.0rc3):
 - Documentation updates
 - DebugMode fixes, cache cleanup fixes and other small fixes

 - New GPU back-end:

   - Fixed offset error in GpuIncSubtensor
   - Fixed indexing error in GpuAdvancedSubtensor for more than 2 dimensions

A total of 5 people contributed to this release since 0.9.0rc3, and 123 since 0.8.0; see the lists below.


Committers since 0.9.0rc3:
 - Frederic Bastien
 - Pascal Lamblin
 - Arnaud Bergeron
 - Cesar Laurent
 - Martin Drawitsch


Theano 0.9.0rc3 (6th of March, 2017)
====================================

This release extends 0.9.0rc2 and announces the upcoming final release 0.9.

Highlights (since 0.9.0rc2):
 - Graph clean up and faster compilation
 - New Theano flag conv.assert_shape to check user-provided shapes at runtime (for debugging)
 - Fix overflow in pooling
 - Warn if taking softmax over broadcastable dimension
 - Removed old files not used anymore
 - Test fixes and crash fixes

 - New GPU back-end:

   - Removed warp-synchronous programming, to get good results with newer CUDA drivers

A total of 5 people contributed to this release since 0.9.0rc2, and 122 since 0.8.0; see the lists below.


Committers since 0.9.0rc2:
 - Frederic Bastien
 - Arnaud Bergeron
 - Pascal Lamblin
 - Florian Bordes
 - Jan Schlüter


Theano 0.9.0rc2 (27th of February, 2017)
========================================

This release extends 0.9.0rc1 and announces the upcoming final release 0.9.

Highlights (since 0.9.0rc1):
 - Fixed dnn conv grad issues
 - Allowed pooling of empty batch
 - Use of 64-bit indexing in sparse ops to allow matrices with more than 2\ :sup:`31`\ -1 elements.
 - Removed old benchmark directory
 - Crash fixes, bug fixes, warnings improvements, and documentation update

A total of 9 people contributed to this release since 0.9.0rc1, and 121 since 0.8.0; see the lists below.


Committers since 0.9.0rc1:
 - Frederic Bastien
 - Pascal Lamblin
 - Steven Bocco
 - Simon Lefrancois
 - Lucas Beyer
 - Michael Harradon
 - Rebecca N. Palmer
 - David Bau
 - Micah Bojrab


Theano 0.9.0rc1 (20th of February, 2017)
========================================

This release extends 0.9.0beta1 and announces the upcoming final release 0.9.

Highlights (since 0.9.0beta1):
 - Better integration of Theano+libgpuarray packages into conda distribution
 - Better handling of Windows line endings in C code
 - Better compatibility with NumPy 1.12
 - Faster scan optimizations
 - Fixed broadcast checking in scan
 - Bug fixes related to merge optimizer and shape inference
 - Many other bug fixes and improvements
 - Updated documentation

 - New GPU back-end:

   - Value of a shared variable is now set inplace

A total of 26 people contributed to this release since 0.9.0beta1, and 117 since 0.8.0; see the list at the bottom.

Interface changes:
 - In MRG, replaced method ``multinomial_wo_replacement()`` with new method ``choice()``

Convolution updates:
 - Implement conv2d_transpose convenience function

GPU:
 - GPUMultinomialFromUniform op now supports multiple dtypes

New features:
 - OpFromGraph now allows gradient overriding for every input
 - Added Abstract Ops for batch normalization that use cuDNN when available and pure Theano CPU/GPU alternatives otherwise
 - Added new Theano flag cuda.enabled
 - Added new Theano flag print_global_stats to print some global statistics (time spent) at the end

Others:
 - Split op now has C code for CPU and GPU
 - "theano-cache list" now includes compilation times


Committers since 0.9.0beta1:
 - Frederic Bastien
 - Benjamin Scellier
 - khaotik
 - Steven Bocco
 - Arnaud Bergeron
 - Pascal Lamblin
 - Gijs van Tulder
 - Reyhane Askari
 - Chinnadhurai Sankar
 - Vincent Dumoulin
 - Alexander Matyasko
 - Cesar Laurent
 - Nicolas Ballas
 - affanv14
 - Faruk Ahmed
 - Anton Chechetka
 - Alexandre de Brebisson
 - Amjad Almahairi
 - Dimitar Dimitrov
 - Fuchai
 - Jan Schlüter
 - Jonas Degrave
 - Mathieu Germain
 - Rebecca N. Palmer
 - Simon Lefrancois
 - valtron


Theano 0.9.0beta1 (24th of January, 2017)
=========================================

This release contains many bug fixes, improvements and new features, to prepare the upcoming release candidate.

Highlights:
 - Many computation and compilation speed-ups
 - More numerical stability by default for some graphs
 - Jenkins (gpu tests run on PR in addition to daily buildbot)
 - Better handling of corner cases for theano functions and graph optimizations
 - More graph optimizations (faster execution and smaller, more readable graphs)
 - Less C code compilation
 - Better Python 3.5 support
 - Better NumPy 1.12 support
 - Support for newer Mac and Windows versions
 - Conda packages for Mac, Linux and Windows
 - Theano scripts now work on Windows
 - scan with checkpoint (trade-off between speed and memory usage, useful for long sequences; see the sketch after this list)
 - Added a bool dtype

 - New GPU back-end:

   - float16 storage
   - better mapping between Theano device numbers and nvidia-smi numbers, using the PCI bus ID of graphics cards
   - More pooling support on GPU when cuDNN isn't there
   - ignore_border=False is now implemented for pooling
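
A minimal sketch of the checkpointing interface (assuming the entry point is theano.scan_checkpoints with a save_every_N argument; treat the exact signature as an assumption)::

    import theano
    import theano.tensor as T

    x0 = T.scalar('x0')
    # Like theano.scan, but only every N-th intermediate state is kept,
    # trading recomputation in the backward pass for less memory.
    outputs, updates = theano.scan_checkpoints(
        fn=lambda prev: prev * 2,
        outputs_info=x0,
        n_steps=100,
        save_every_N=10)
    f = theano.function([x0], outputs, updates=updates)
    print(f(1.0))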


A total of 111 people contributed to this release since 0.8.0, see the list at the bottom.


Interface changes:
 - New pooling interface
 - Pooling parameters can change at run time
 - When converting an empty list/tuple, we now use the floatX dtype
 - The MRG random generator now tries to infer the broadcast pattern of its output
 - Move softsign out of sandbox to theano.tensor.nnet.softsign
 - Roll now makes the shift modulo the size of the axis we roll on
 - Merge CumsumOp/CumprodOp into CumOp
 - round() now defaults to the same behavior as NumPy: half_to_even (see the example below)
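
For example, halves now round to even, matching numpy.round::

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dvector('x')
    f = theano.function([x], T.round(x))  # default mode is now half_to_even
    print(f(np.array([0.5, 1.5, 2.5])))   # -> [0. 2. 2.], like np.round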

Convolution updates:
 - Multi-core convolution and pooling on CPU
 - New abstract 3d convolution interface similar to the 2d convolution interface
 - Dilated convolution

GPU:
 - cuDNN: support version 5.1 and wrap batch normalization (2d and 3d) and RNN functions
 - Multi-GPU, synchronous updates (via Platoon, using NCCL)
 - GpuAdvancedSubtensor in new back-end
 - Gemv (matrix-vector product) speed-up for special shapes
 - Support for MaxAndArgMax for some axis combinations
 - Support for solve (using cusolver), erfinv and erfcinv
 - cuBLAS gemv workaround when we reduce on an axis with a dimension size of 0
 - Warn user that some cuDNN algorithms may produce unexpected results in certain environments
   for convolution backward filter operations

New features:
 - Add gradient of solve, tensorinv (CPU), tensorsolve (CPU), searchsorted (CPU)
 - Add Multinomial Without Replacement
 - conv3d2d supports full and half modes
 - Add DownsampleFactorMaxGradGrad.grad
 - Allow partial evaluation of compiled functions
 - More Rop support
 - Indexing now supports ellipsis: a[..., 3], a[1, ..., 3]
 - Added theano.tensor.{tensor5,dtensor5, ...}
 - compiledir_format now supports device
 - Added new Theano flag cmodule.age_thresh_use

Others:
 - Speed up computing only the argmax on GPU (without also needing the max)
 - A few infrequent bug fixes
 - More stack traces in error messages
 - Speed up cholesky grad
 - log(sum(exp(...))) now gets stability-optimized

Other more detailed changes:
 - Allow more than one output to be a destructive inplace
 - Add flag profiling.ignore_first_call, useful to profile the new gpu back-end
 - Doc/error message fixes/updates
 - More support of negative axis
 - Added the keepdims parameter to the norm function
 - Crash fixes
 - Make scan gradient more deterministic
 - Add support for space in path on Windows
 - Remove ProfileMode (use the Theano flag profile=True instead)


Committers since 0.8.0:
 - Frederic Bastien
 - Arnaud Bergeron
 - Pascal Lamblin
 - Ramana Subramanyam
 - Simon Lefrancois
 - Steven Bocco
 - Gijs van Tulder
 - Cesar Laurent
 - Chiheb Trabelsi
 - Chinnadhurai Sankar
 - Mohammad Pezeshki
 - Reyhane Askari
 - Alexander Matyasko
 - Alexandre de Brebisson
 - Nan Rosemary Ke
 - Pierre Luc Carrier
 - Mathieu Germain
 - Olivier Mastropietro
 - khaotik
 - Saizheng Zhang
 - Thomas George
 - Iulian Vlad Serban
 - Benjamin Scellier
 - Francesco Visin
 - Caglar
 - Harm de Vries
 - Samira Shabanian
 - Jakub Sygnowski
 - Samira Ebrahimi Kahou
 - Mikhail Korobov
 - Faruk Ahmed
 - Fei Wang
 - Jan Schlüter
 - Kv Manohar
 - Jesse Livezey
 - Kelvin Xu
 - Matt Graham
 - Ruslana Makovetsky
 - Sina Honari
 - Bryn Keller
 - Ciyong Chen
 - Nicolas Ballas
 - Vitaliy Kurlin
 - Zhouhan LIN
 - Gokula Krishnan
 - Kumar Krishna Agrawal
 - Ozan Çağlayan
 - Vincent Michalski
 - Ray Donnelly
 - Tim Cooijmans
 - Vincent Dumoulin
 - happygds
 - mockingjamie
 - Amjad Almahairi
 - Christos Tsirigotis
 - Ilya Kulikov
 - RadhikaG
 - Taesup (TS) Kim
 - Ying Zhang
 - Karthik Karanth
 - Kirill Bobyrev
 - Yang Zhang
 - Yaroslav Ganin
 - Liwei Cai
 - Morgan Stuart
 - Tim Gasper
 - Xavier Bouthillier
 - p
 - texot
 - Andrés Gottlieb
 - Ben Poole
 - Bhavishya Pohani
 - Carl Thomé
 - Evelyn Mitchell
 - Fei Zhan
 - Fábio Perez
 - Gennadiy Tupitsin
 - Gilles Louppe
 - Greg Ciccarelli
 - He
 - Huan Zhang
 - Jonas Degrave
 - Kaixhin
 - Kevin Keraudren
 - Maltimore
 - Marc-Alexandre Cote
 - Marco
 - Marius F. Killinger
 - Maxim Kochurov
 - Neil
 - Nizar Assaf
 - Rithesh Kumar
 - Rizky Luthfianto
 - Robin Millette
 - Roman Ring
 - Sander Dieleman
 - Sebastin Santy
 - Shawn Tan
 - Wazeer Zulfikar
 - Wojciech Głogowski
 - Yann N. Dauphin
 - gw0 [http://gw.tnode.com/]
 - hexahedria
 - hsintone
 - jakirkham
 - joncrall
 - root
 - superantichrist
 - tillahoffmann
 - wazeerzulfikar
 - you-n-g


Theano 0.8.2 (21st of April, 2016)
==================================

This is a point release with only support for cuDNN v5 convolution
and minor fixes.

Highlights:
- cuDNN v5 convolution support (cuDNN v3 isn't supported anymore)
- A few crash fixes


Theano 0.8.1 (29th of March, 2016)
==================================

This is a point release without any new feature.

It fixes compilation issues on MacOS X with the command line tools for
XCode 7.3, which was released shortly after Theano 0.8.0.


Theano 0.8 (21st of March, 2016)
================================

We recommend that everybody update to this version.

Highlights:
 - Python 2 and 3 support with the same code base
 - Faster optimization
 - Integration of cuDNN for better GPU performance
 - Many Scan improvements (execution speed-ups, ...)
 - optimizer=fast_compile moves computation to the GPU.
 - Better convolution on CPU and GPU (CorrMM, cuDNN, 3d conv, more parameters)
 - Interactive visualization of graphs with d3viz
 - CNMeM (better memory management on GPU)
 - BreakpointOp
 - Multi-GPU for data parallelism via Platoon (https://github.com/mila-udem/platoon/)
 - More pooling parameters supported
 - Bilinear interpolation of images
 - New GPU back-end:

   * Float16 in the new back-end (needs CUDA 7.5)
   * Multiple dtypes
   * Multi-GPU support in the same process


A total of 141 people contributed to this release, see the list at the bottom.


Installation:
 - Better BLAS detection
 - Fixes for more recent software and OS versions
 - Support Anaconda on Windows

Bug fixes:
 - GpuJoin now supports negative axis
 - Fix GpuCumsum for negative axis

Interface Deprecation (a warning is printed):
 - Deprecate Param class, use In instead

Interface Changes:
 - Rename DownsampleFactorMax to Pool.
 - tensor.stack now uses the same interface as numpy.stack
 - optimizer=fast_compile moves computation to the GPU
 - Raise the user stack trace more frequently.
 - Change dev version numbering to follow PEP 440


New Interface (reuses existing functionality):
 - theano.tensor.nnet.relu
 - theano.tensor.nnet.elu
 - BatchNormalization.
 - MaxAndArgmax supports axis=None
 - Add theano.tensor.compress (equivalent of numpy.compress)
 - theano.tensor.signal.downsample.max_pool_2d_same_size
 - COp
 - __props__

New features:
 - tensor.unique
 - map_variables
 - erfcx
 - mgrid, ogrid
 - allclose
 - BreakpointOp
 - Make bincount work on GPU
 - SolveOp on GPU
 - Optional optimization remove_all_assert
 - AllocEmpty
 - LogSoftmax, for stability optimization when the crossentropy optimization does not apply.
 - theano.tensor.repeat works on GPU
 - BatchedDot on the GPU and faster on the CPU.
 - Faster batched_tensordot and make it work on GPU.
 - SoftmaxGrad grad
 - 3d conv via CorrMM on the GPU
 - CPU max pooling supports padding and strides != window size
 - theano.function() now accepts a dict for the outputs. When doing this, the function will return a dict. Helpful to keep track of which output is what. (See the sketch after this list.)
 - Warn for unknown or misspelled theano config variables
 - theano.tensor.tile update (accept symbolic reps, work on GPU)
 - scan now has a strict flag. If set to True, this makes scan building faster and could make execution faster.
 - theano.tensor.signal.conv2d(2d, 2d) outputs a 2d answer
 - More convolution parameters supported
 - Bilinear interpolation of images
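
A small sketch of the dict-outputs form mentioned above::

    import theano
    import theano.tensor as T

    x = T.dscalar('x')
    # Passing a dict of outputs makes the compiled function return a dict,
    # which helps keep track of which output is what.
    f = theano.function([x], {'double': 2 * x, 'square': x ** 2})
    res = f(3.0)
    print(res['double'], res['square'])  # 6.0 9.0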


Speed-ups:
 - Faster SetSubtensor on the GPU.
 - Support more reduction patterns on the GPU.
 - More graph optimization
 - Faster graph optimization
 - GpuCrossentropySoftmaxArgmax1HotWithBias


Crash/no return fixes:
 - Fix crash in the assert op grad
 - Fix curand crash on Mac
 - Multiple scan crash fixes
 - Finished updating all Op.grad() implementations to the new interface

Others:
 - Support for ARM processors.
 - Better tests
 - Code clean up.
 - Doc updates
 - doctest and sphinx test in travis
 - More tests tagged as slow
 - Better same_shape implementation
 - More ops with C code to lower overhead
 - Custom pickler for SharedVariable theano.misc.pkl_utils.{dump,load}
 - function_dump to help us reproduce user error during compilation
 - assert_no_cpu_op
 - pep8, flake8
 - Better error messages
 - On non-default modes, reduce the number of allocations when allow_gc=False
 - Better lock


Committers for this dev version only:
 - Frederic Bastien
 - Arnaud Bergeron
 - Pierre Luc Carrier
 - Iban Harlouchet
 - Pascal Lamblin
 - Chienli Ma
 - Tim Cooijmans
 - Nicolas Ballas
 - Amjad Almahairi
 - David Warde-Farley
 - Christof Angermueller
 - Ziye Fan
 - Caglar
 - Sina Honari
 - Roy Xue
 - hantek
 - Mohammad Pezeshki
 - Melanie Ducoffe
 - Alexandre de Brebisson
 - Harm de Vries
 - Samira Shabanian
 - Alex Lamb
 - Ramana.S
 - Francesco Visin
 - Saizheng Zhang
 - Ying Zhang
 - Jan Schlüter
 - Xavier Bouthillier
 - Bart van Merrienboer
 - Cesar Laurent
 - Iulian Vlad Serban
 - Li Yao
 - Sigurd Spieckermann
 - Dmitrii Serdiuk
 - Kelvin Xu
 - Sebastien Jean
 - Thomas Mesnard
 - Seon-Wook Park
 - Vincent Michalski
 - Dustin Webb
 - Mikhail Korobov
 - Orhan Firat
 - Olivier Mastropietro
 - Daniel Renshaw
 - Julien Rebetez
 - Peng Liu
 - Sean Lee
 - TimSalimans
 - Andre Holzner
 - Gijs van Tulder
 - Guillaume Alain
 - Julien Demouth
 - Markus Beissinger
 - Mehdi Mirza
 - Moslem Kazemi
 - Saxenauts
 - Søren Kaae Sønderby
 - sentient07
 - Anatoly Belikov
 - Diogo Moitinho de Almeida
 - Jakub Sygnowski
 - Kashif Rasul
 - Laurent Dinh
 - Rémy Léone
 - Taesup (TS) Kim
 - gw0 [http://gw.tnode.com/]
 - mronian
 - vesis84
 - Benni
 - Chiheb Trabelsi
 - JesseLivezey
 - Marius Killinger
 - Matt Graham
 - Matthew Willson
 - Piotr Frankowski
 - Stefan Krastanov
 - vdumoulin
 - Adithya Ganesh
 - Anish Shah
 - Balázs Hidasi
 - Colin Raffel
 - Cory Lorenz
 - Doug
 - Jesse Livezey
 - John Salvatier
 - John Zedlewski
 - Jonathan Ho
 - Kaixhin
 - Liang-Chi Hsieh
 - Lucas Beyer
 - Luke Metz
 - Marc-Alexandre Cote
 - Martin Arjovsky
 - Matthias Kümmerer
 - Sirisha Rambhatla
 - briancheung
 - cai-lw
 - ivdorelian
 - jan-matthis
 - jojolalpin
 - joncrall
 - peterjsadowski
 - scottsievert
 - Étienne Simon
 - A. Flaxman
 - AlOa
 - Albert Zeyer
 - Andrea
 - Andy Jiang
 - Balázs
 - Ben Poole
 - Brian Cheung
 - Christophe Van Gysel
 - Claude Coulombe
 - Clay McLeod
 - Dario Garcia
 - Jakob Lombacher
 - Joao Felipe Santos
 - John Arevalo
 - Jonas Degrave
 - Martin Thoma
 - Mathieu Germain
 - Matthew Koichi Grimes
 - Michael Eickenberg
 - Michael Opitz
 - Paul Hollensen
 - Prayag Verma
 - Saatvik Shah
 - Sergei Lebedev
 - Vik Kamath
 - Wei Ouyang
 - Wojciech Głogowski
 - Yi-Lin Juang
 - Yurii Shevchuk
 - Zach Dwiel
 - dan
 - eulerreich
 - jotterbach
 - rolf
 - theaverageguy
 - wuaalb


Theano 0.7 (26th of March, 2015)
================================
We recommend that everybody upgrade to this version.

Highlights:
 * Integration of cuDNN for 2D convolutions and pooling on supported GPUs
 * Too many optimizations and new features to count
 * Various fixes and improvements to scan
 * Better support for GPU on Windows
 * On Mac OS X, clang is used by default
 * Many crash fixes
 * Some bug fixes as well


Theano 0.6 (December 3rd, 2013)
===================================

We recommend that everybody update to this version.


Highlights (since 0.6rc5):
 * Last release with support for Python 2.4 and 2.5.
 * We will try to release more frequently.
 * Fix crash/installation problems.
 * Use less memory for conv3d2d.

0.6rc4 skipped for a technical reason.

Highlights (since 0.6rc3):
 * Python 3.3 compatibility with buildbot test for it.
 * Full advanced indexing support.
 * Better Windows 64 bit support.
 * New profiler.
 * Better error messages that help debugging.
 * Better support for newer NumPy versions (remove useless warning/crash).
 * Faster optimization/compilation for big graph.
 * Moved the Conv3d2d implementation into Theano.
 * Better SymPy/Theano bridge: make a Theano op from a SymPy expression and use the SymPy C code generator.
 * Bug fixes.

Changes since 0.6rc5:
 * Fix crash when specifying march in cxxflags Theano flag. (Frederic B., reported by FiReTiTi)
 * code cleanup (Jorg Bornschein)
 * Fix Canopy installation on Windows when it was installed for all users. (Raingo)
 * Fix Theano tests due to a scipy change. (Frederic B.)
 * Work around bug introduced in scipy dev 0.14. (Frederic B.)
 * Fix Theano tests following bugfix in SciPy. (Frederic B., reported by Ziyuan Lin)
 * Add Theano flag cublas.lib (Misha Denil)
 * Make conv3d2d work more inplace (so less memory usage) (Frederic B., reported by Jean-Philippe Ouellet)


Committers since 0.5:

Frederic Bastien
Pascal Lamblin
Ian Goodfellow
Olivier Delalleau
Razvan Pascanu
abalkin
Arnaud Bergeron
Nicolas Bouchard +
Jeremiah Lowin +
Matthew Rocklin
Eric Larsen +
James Bergstra
David Warde-Farley
John Salvatier +
Vivek Kulkarni +
Yann N. Dauphin
Ludwig Schmidt-Hackenberg +
Gabe Schwartz +
Rami Al-Rfou' +
Guillaume Desjardins
Caglar +
Sigurd Spieckermann +
Steven Pigeon +
Bogdan Budescu +
Jey Kottalam +
Mehdi Mirza +
Alexander Belopolsky +
Ethan Buchman +
Jason Yosinski
Nicolas Pinto +
Sina Honari +
Ben McCann +
Graham Taylor
Hani Almousli
Ilya Dyachenko +
Jan Schlüter +
Jorg Bornschein +
Micky Latowicki +
Yaroslav Halchenko +
Eric Hunsberger +
Amir Elaguizy +
Hannes Schulz +
Huy Nguyen +
Ilan Schnell +
Li Yao
Misha Denil +
Robert Kern +
Sebastian Berg +
Vincent Dumoulin +
Wei Li +
XterNalz +


A total of 51 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.


Theano 0.6rc5 (November 25th, 2013)
===================================

We recommend that everybody update to this version.

We plan to release 0.6 in one week if there is no problem introduced
with this release candidate.

Theano 0.6rc4 was skipped due to a problem with pypi

Highlights:
 * Python 3.3 compatibility with buildbot test for it.
 * Full advanced indexing support.
 * Better Windows 64 bit support.
 * New profiler.
 * Better error messages that help debugging.
 * Better support for newer NumPy versions (remove useless warning/crash).
 * Faster optimization/compilation for big graph.
 * Moved the Conv3d2d implementation into Theano.
 * Better SymPy/Theano bridge: make a Theano op from a SymPy expression and use the SymPy C code generator.
 * Bug fixes.

Committers for this rc5 only:

Frederic Bastien
Pascal Lamblin
Arnaud Bergeron
abalkin
Olivier Delalleau
John Salvatier
Razvan Pascanu
Jeremiah Lowin
Ludwig Schmidt-Hackenberg +
Vivek Kulkarni
Matthew Rocklin
Gabe Schwartz
James Bergstra
Sigurd Spieckermann +
Bogdan Budescu +
Mehdi Mirza +
Nicolas Bouchard
Ethan Buchman +
Guillaume Desjardins
Ian Goodfellow
Jason Yosinski
Sina Honari +
Ben McCann +
David Warde-Farley
Ilya Dyachenko +
Jan Schluter +
Micky Latowicki +
Yaroslav Halchenko +
Alexander Belopolsky
Hannes Schulz +
Huy Nguyen +
Robert Kern +
Sebastian Berg +
Vincent Dumoulin +
Wei Li +
XterNalz +


A total of 36 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.

Installation:
 * Canopy support (direct link to MKL):
   * On Linux and Mac OSX (Frederic B., Robert Kern)
   * On Windows (Edward Shi, Frederic B.)

 * Anaconda instructions (Pascal L., Frederic B.)
 * Doc Ubuntu 13.04 (Frederic B.)
 * Better support of newer NumPy versions (remove useless warning/crash) (Frederic B., Huy Nguyen)

Bug fixes:
 * Scan: if a scan node was cloned (by theano.clone) with different inputs, and if both the initial and the cloned nodes are used in the function being compiled, the value of the outputs of one would be replaced with the outputs of the other one. (Pascal L.)
 * Sparse: Disable the optimization that introduces the CSMGradC op, as it doesn't work correctly with unsorted indices. (Frederic B.)
 * Mac: Fix wrong result of GpuDownsampleFactorMaxGrad on Mac OSX. (Pascal L.)
 * Mac: Auto-Detect and work around a bug in BLAS on MacOS X (Pascal L.)
 * Mac: Work around bug in MacOS X. If 2 compiled modules had the same name, the OS or Python was not always the right one even when we used the right handle to it. (Pascal L.)
   Use this hash in the Python module, and in %(nodename)s, so that different helper functions in the support code for different Ops will always have different names.
 * Sparse grad: Fix ConstructSparseFromList.infer_shape (Pascal L., reported by Rami Al-Rfou')
 * Fix reductions that upcast the input over no axis (e.g. calling theano.sum() on a scalar when the original dtype isn't float64 or
   [u]int64). They produced bad results, as we did not upcast the inputs in the code; we just copied them.
   (Introduced in the development version after the 0.6rc3 release.) (Frederic B.)
 * Fix some cases of theano.clone() when we get a replacement of x that is a function of x. (Razvan P., reported by Akio Takano)
 * Fix grad of Alloc when we unbroadcast the value and it isn't a scalar. (Frederic B., reported by Ian G.)

   * In some cases (I think most cases), there was an exception raised in the theano.tensor.grad() method.
     But in theory, there could be bad shapes produced in the unbroadcasted dimensions.

Interface Deprecation (a warning is printed):
 * The mode ProfileMode is now deprecated, use the Theano flag profile=True to replace it.
 * New theano.sparse_grad() interface to get the sparse grad of a_tensor[an_int_vector]. (Frederic B.)
   This can speed up the sparse computations when a small fraction of a_tensor is taken.
   Deprecate the old interface for this. (Frederic B.)
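
A minimal sketch of the new interface (the exact call pattern is an assumption; SciPy is needed for the sparse result)::

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dmatrix('x')
    idx = T.ivector('idx')
    # Wrapping the advanced subtensor in sparse_grad() makes the gradient
    # with respect to x a sparse variable instead of a dense one.
    sub = theano.sparse_grad(x[idx])
    g = theano.grad(sub.sum(), x)
    f = theano.function([x, idx], g)
    print(f(np.ones((5, 3)), np.array([0, 2], dtype='int32')))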

Interface Changes:
 * Interface change: subtensor and take are not in tensor.basic anymore. They were available from tensor.* and are still available from there. (Frederic B., Matthew Rocklin)
   * This lowers the basic.py size to 191k, so under 200k for github search.
 * Add -m32 or -m64 in the module cache key and add the python bitwidth in the compiledir path. (Pascal L.)
 * The size parameter of mrg.normal is now mandatory. It was crashing with the default value of None. (Olivier D.)
 * Remove the deprecated passing of multiple modes to theano function. (Frederic B.)
 * Change the FunctionGraph Features interface of the {on_prune(),on_import()} callbacks to take a reason. (Frederic B.)
 * FunctionGraph now clones the input graph by default. (Frederic B.)
   * Added a parameter to optionally not do this cloning.
   * This was needed to speed up compilation

New Interface (reuses existing functionality):
 * Add hostname as a var in compiledir_format (Frederic B.)
 * Add a new Theano flag: compute_test_value_opt. It takes the same values as compute_test_value. It enables compute_test_value during Theano optimization. Only useful to debug Theano optimization. Also small changes to some optimization to work correctly in that setup. (Frederic B.)
 * Add the value pdb to the Theano flag: compute_test_value and compute_test_value_opt. (Frederic B.)
 * Add the Theano flag: optimizer_verbose. Default False. When True, we print all the optimization being applied.(Frederic B.)
 * Add Op.c_init_code() to allow running the code when the c cmodule is imported (Pascal L.)
 * Allow theano.tensor.ones(3) to support a scalar, and not just a list of scalars, as numpy.ones (Jeremiah Lowin)
 * Make the memory profiler print the FLOPS used for the ops that know how to compute it. (Frederic B.)

New Features:
 * Make tensor.{constant,as_tensor_variable} work with memmap. (Christian Hudon, Frederic Bastien)
 * Compilation works on ARM processors (Raspberry Pi; Vincent Dumoulin)
 * Add numpy.random.choice wrapper to our random number generator (Sigurd Spieckermann)
 * Better SymPy/Theano bridge: make a Theano op from a SymPy expression and use the SymPy C code generator (Matthew Rocklin)
 * Moved the Conv3d2d implementation into Theano (James Bergstra, Frederic B., Pascal L.)
 * First version of the new GPU back-end available (Arnaud Bergeron, Frederic B.)

   * Not all Ops have been converted to this new back-end.
     To use it, use the Theano flag device=cudaN or device=openclN, where N is an integer.
 * Python 3.3 compatible (abalkin, Gabe Schwartz, Frederic B., Pascal L.)
 * A new profiler (Frederic B.)
   The new profiler can now profile memory with the Theano flag profile_memory=True.
   The ProfileMode can't profile memory anymore and prints a message about it.
   Now we raise an error if we try to profile when the gpu is enabled and we didn't
   correctly set the env variable to force the driver to sync the kernel launch;
   otherwise the profile information is useless.
   The new profiler supports enabling/disabling the garbage collection.
 * Adds tensor.tri, tensor.triu, and tensor.tril functions that wrap Numpy equivalents (Jeremiah Lowin)
 * Adds tensor.nonzero, tensor.flatnonzero functions that wrap Numpy equivalents (Jeremiah Lowin)
 * Adds tensor.nonzero_values to get around lack of advanced indexing for nonzero elements (Jeremiah Lowin)
 * Make {inc,set}_subtensor work on output of take. (Pascal L.)
 * When device=cpu and force_device=True, force that we disable the gpu. (Frederic B.)
 * Better Windows 64 bit support for indexing/reshaping (Pascal L.)
 * Full advanced indexing support (John Salvatier, seberg)
 * Add theano.tensor.stacklist(). Recursively stack lists of tensors to maintain similar structure (Matthew R.)
 * Add Theano flag value: on_opt_error=pdb (Olivier D.)
 * GpuSoftmax[WithBias] for bigger rows. (Frederic B.)
 * Make Erfinv work on the GPU (Guillaume Desjardins, Pascal L.)
 * Add "theano-cache basecompiledir purge" (Pascal L.)
   This purges all the compiledirs that are in the base compiledir.
 * A_tensor_variable.zeros_like() now supports the dtype parameter (Pascal L.)
 * More stable reduce operations by default (Pascal L.)
   Add an accumulator dtype to CAReduceDtype (acc_dtype).
   By default, acc_dtype is float64 for float32 inputs,
   then cast to the specified output dtype (float32 for float32 inputs).
 * Test default blas flag before using it (Pascal L.)
   This makes it work correctly by default if no blas library is installed.
 * Add cuda.unuse() to help tests that need to enable/disable the GPU (Frederic B.)
 * Add theano.tensor.nnet.ultra_fast_sigmoid and the opt (disabled by default) local_ultra_fast_sigmoid. (Frederic B.)
 * Add theano.tensor.nnet.hard_sigmoid and the opt (disabled by default) local_hard_sigmoid. (Frederic B.)
 * Add class theano.compat.python2x.Counter() (Mehdi Mirza)
 * Allow a_cuda_ndarray += another_cuda_ndarray for 6d tensors. (Frederic B.)
 * Make the op ExtractDiag work on the GPU. (Frederic B.)
 * New op theano.tensor.chi2sf (Ethan Buchman)
 * Lift Flatten/Reshape toward input on unary elemwise. (Frederic B.)
   This makes the "log(1-sigmoid) -> softplus" stability optimization apply even with a flatten/reshape in the middle.
 * Make MonitorMode use the default optimizers config and allow it to change used optimizers (Frederic B.)
 * Add support for ScalarOp.c_support_code in GpuElemwise. (Frederic B.)
 * Also make the Psi function run on GPU. (Frederic B.)
 * Make tensor.outer(x, y) work when ndim != 1, as numpy.outer does.
 * Kron op: Speed up/generalize/GPU friendly. (Frederic B.)
   (It is not an op anymore; it reuses current ops)
 * Add gpu max for pattern (0, 1) and add all gpu max patterns for gpu min. (Frederic B.)
 * Add GpuEye (Frederic B.)
 * Make GpuCrossentropySoftmaxArgmax1HotWithBias and GpuCrossentropySoftmax1HotWithBiasDx work for bigger inputs (Frederic B., reported by Ryan Price)
 * Finish and move out of sandbox theano.sparse.basic.true_dot (Nicolas Bouchard, Frederic B.)
   And document all sparse dot variants.
 * Implement the mode ignore_borders for GpuImages2Neibs (Frederic B.)
 * Make many reduction functions accept a numpy scalar as axis (Jeremiah Lowin)
 * Allow numpy.asarray(cuda_ndarray, dtype=...) (Frederic B.)
 * theano-cache cleanup now removes cached modules compiled from old versions of the code. (Frederic B.)


Speed-ups:
 * Optimizer speed up. (Frederic B.)
 * Fix warning on newer llvm version on Mac. (Pascal L., reported by Jeremiah Lowin and Chris Fonnesbeck)
 * Allow pickling of more Ops to allow reusing the compiled code (Pascal L., Frederic B.)
 * Optimize more cases of dot22 and scalar when we can't make a gemm (Pascal L., Frederic B.)
 * Speed up GpuJoin with c code (Ludwig Schmidt-Hackenberg, Frederic B.)
 * Faster GpuAdvancedIncSubtensor1 on Fermi GPU (and up) on matrix. (Vivek Kulkarni)
 * Faster GPUAdvancedIncSubtensor1 in some cases on all GPU (Vivek Kulkarni)
 * Implemented c_code for AdvancedSubtensor1 (abalkin)
 * Add the equivalent of -march=native to g++ command line. (Frederic B., Pascal L.)
 * Speed up compilation with Scan (Jan Schluter)
 * Merge more Scan nodes together (Pascal L., Yao Li).
 * Add MakeVector.c_code (Frederic B.)
 * Add Shape.c_code (Frederic B.)
 * Optimize Elemwise when all the inputs are Fortran-ordered (Frederic B.)
   We now generate a Fortran output and use vectorizable code.
 * Add ScalarOp.c_code_contiguous interface and do a default version. (Frederic B.)
   This could optimize elemwise by helping the compiler generate SIMD instruction.
 * Use ScalarOp.c_code_contiguous with amdlibm. (Frederic B.)
   This speeds up exp, pow, sin, cos, log, log2, log10 and sigmoid when the input is contiguous in memory.
 * A fix that removes a local_setsubtensor_of_allocs optimization warning and enables it in that case. (Frederic B., reported by John Salvatier)
 * Make inv_as_solve optimization work (Matthew Rocklin)

Crash/no return fixes:
 * Fix scan crash in the grad of grad of a scan with special structure (including scan in a scan) (Razvan P., Bitton Tenessi)
 * Fix various crashes when calling scan() with inputs specified in unusual ways. (Pascal L.)
 * Fix shape crash inserted by Scan optimization. The gradient of some recursive scan was making the PushOutSeqScan optimization insert crash during the execution of a Theano function. (Frederic B., reported by Hugo Larochelle)
 * Fix command not returning with recent mingw64 on Windows (Pascal L., reported by many people)
 * Fix infinite loop related to Scan on the GPU. (Pascal L.)
 * Fix infinite loop when the compiledir is full. (Frederic B.)
 * Fix a shape cycle crash in the optimizer (Pascal L., Frederic B., reported by Cho KyungHyun)
 * Fix MRG normal() to allow it to generate scalars. (Pascal L.)
 * Fix some GPU compilation issue on Mac (John Yani, Frederic B.)
 * Fix crash when building symbolic random variables with a mix of symbolic and numeric scalar in the "size" parameter. (Pascal L., Reported by Wu Zhen Zhou)
 * Make some Op.grad() implementations not return None (Pascal L.)
 * Crash fix in the grad of elemwise about a DisconnectedType (Pascal L., reported by Thomas Wiecki)
 * Fix local_gpu_multinomial optimization handling of broadcast information. (Frederic B., reported by Caglar)
 * Fix crash with change introduced in NumPy 1.7.1 (Pascal L., reported by Thomas Wiecki)
 * Compilation failure with complex (Pascal L., reported by autumncat)
 * Gpu reduction on all dimensions of a 4d tensor. (Frederic B., reported by Arjun Jain)
 * Fix crash for a combination of grad of dot and dimshuffle when only one of the inputs for a corresponding dimension was known to be broadcastable. (Frederic B., reported by Micky Latowicki)
 * AdvancedSubtensor1: allow broadcasted index vector. (Frederic B., reported by Jeremiah Lowin)
 * Fix compute_test_value for ifelse (Olivier D., reported by Bitton Tenessi)
 * Fix import error with some versions of NumPy (Olivier D.)
 * Fix Scan grad exception (Razvan P., reported by Nicolas BL)
 * Fix compute_test_value for a non_sequence when calling the gradient of Scan (Pascal L., reported by Bitton Tenessi).
 * Crash fix in Scan following interface change in 0.6rc2 (Razvan P.)
 * Crash fix on Scan (Razvan P.)
 * Crash fix on Scan (Pascal L., reported by Sina Honari and Sigurd)
 * Fix crash in Scan gradient related to compute_test_value (Frederic B., reported by Bitton Tenessi)
 * Fix a scan optimization warning/error depending of Theano flags (Frederic B.)
 * Fixed crash for unimplemented elemwise gradient (Olivier D., reported by Michael McNeil Forbes)
 * Fix crash in the elemwise python code for some big shape with power of 2. (Sina Honari, Pascal L.)
 * Fix compile and import errors on Windows including for the GPU. (Bogdan Budescu)
 * Fix GPU compilation on Windows (XterNalz)
 * Fix local_abs_merge optimization crash (Pascal L., reported by Jeremiah Lowin)
 * Fix import theano crash when g++ isn't there (Olivier D.)
 * Fix crash related to rebuild of Theano graph (Pascal L., reported by Divine Eguzouwa)
 * Fix crash during compilation (David Warde-Farley)
 * Crash fix in the grad of GPU op in corner case (Pascal L.)
 * Crash fix on MacOS X (Robert Kern)
 * theano.misc.gnumpy_utils.garray_to_cudandarray() set strides correctly for dimensions of 1. (Frederic B., reported by Justin Bayer)
 * Fix crash during optimization with consecutive sums and some combination of axis (Frederic B., reported by Caglar Gulcehre)
 * Fix crash with keepdims and negative axis (Frederic B., reported by David W.-F.)
 * Fix crash of theano.[sparse.]dot(x,y) when x or y is a vector. (Frederic B., reported by Zsolt Bitvai)
 * Fix opt crash/disabled with ifelse on the gpu (Frederic B, reported by Ryan Price)
 * Fix crash in optimization involving dot22. (Pascal L., reported by @micklat)
 * Prevent shape optimizations from introducing cycles in the graph (Frederic Bastien, Pascal Lamblin, reported by Kyunghyun Cho)

Others:
 * Update/Fixes/Typo/pep8 documentation and/or tutorial (Olivier D., David W.-F., Frederic B., Yaroslav Halchenko, Micky Latowicki, Ben McCann, Jason Yosinski, reported by Arnaud Bergeron)
 * Doc how to make a sparse Op. (Frederic B.)
 * Doc compatibility guide (abalkin)
 * Fix problem in remove_constants_and_unused_inputs_scan. (useless warning and maybe slow down) (Pascal L.)
 * Fix Rop of dot. (Razvan P., reported by Jeremiah Lowin)
 * Raise better error related to pydot bug. (Frederic B., reported by Jason Yosinski and Ludwig Schmidt-Hackenberg)
 * Fix to Theano tutorial examples. (reported by Ilya Dyachenko)
 * Fix SharedVar.value property to make it raise an exception (Frederic B., reported by Drew Duncan)
 * Fix verification with compute_test_value in grad() (Frederic B.)
 * Theano flags are now evaluated lazily, only if requested (Frederic B.)
 * Fix tests when g++ is not available (Frederic B.)
 * Add manual instructions for OpenBLAS on Ubuntu (Jianri Li)
 * Better/more error messages (Frederic B., Pascal L., Ian Goodfellow)
 * Fix Error reporting with GpuConv (Frederic B., reported by Heng Luo and Nicolas Pinto)
 * Now travis-ci tests with scipy the parts that need it (Frederic B.)
 * Export some functions that work on CudaNdarray for windows (Frederic B.)
 * If the user specifies a -arch=sm_* value in the Theano flags for the gpu, don't add one (Frederic B., Pascal L.)
 * If a C thunk returns an error, check if a python exception is set. Otherwise, set a default one (Pascal L.)
 * Crash fix introduced in the development version (Wei LI)
 * Added BLAS benchmark result (Frederic B., Ben McCann)
 * Fix code comment (Hannes Schulz)
 * More stable tests (Frederic B.)
 * Add utt.assert_allclose(a, b) to have better error messages. (Frederic B.)
 * Better error message with compute_test_value (Frederic, reported by John Salvatier)
 * Stochastic order behavior fix (Frederic B.)

 * Simpler initial graph for subtensor infer shape (Olivier D.)
   The optimizations were already doing this simplification, but this allows better reading of the graph before optimization.
 * Better detection of non-aligned ndarray (Frederic B.)
 * Update MRG multinomial gradient to the new interface (Mehdi Mirza)
 * Implement Image2Neibs.perform() to help debug (Frederic B.)
 * Remove some Theano flags from the compilation key (Frederic B.)
 * Make theano-nose work on executable '\*.py' files. (Alistair Muldal)
 * Make theano-nose work with older nose version (Frederic B.)
 * Add extra debug info in verify_grad() (Frederic B.)


Theano 0.6rc3 (February 14th, 2013)
===================================

Highlights:
 * Windows related fixes.
 * Speed-ups.
 * Crash fixes.
 * A few small interface changes.
 * GPU memory leak fix.
 * A few corner cases fixes without incidence.
 * More Theano determinism
 * tensor.{dot,tensordot} more complete/faster/GPU friendly.
 * tensor.tensordot now supports Rop/Lop
 * tensor.dot supports n-dimensional inputs as NumPy
 * To support more NumPy syntax:
     * Add theano.tensor.take()
     * Add a_tensor_variable.{sort,dot,std,argmin,argmax,argsort,clip,conj,conjugate,repeat,round,trace,real,imag,take}

Committers for this rc3 only:
Frederic Bastien
Ian Goodfellow
Pascal Lamblin
Jeremiah Lowin
abalkin
Olivier Delalleau
Razvan Pascanu
Rami Al-Rfou'
Vivek Kulkarni
Guillaume Desjardins
David Warde-Farley
Eric Hunsberger
Amir Elaguizy
James Bergstra

Bug fix:
 * Fix memory leak on the GPU in some corner cases with the Theano flags `allow_gc=False`. (Frederic B., reported by Jonas Gehring)
 * Fix copy of random state between graph. (Guillaume D.)
   http://deeplearning.net/software/theano/tutorial/examples.html#copying-random-state-between-theano-graphs
 * Fix wrong dtype in sandbox.linalg.ExtractDiag with shape of 0. (Frederic B., reported by abalkin)
 * Correctly support arrays with more than 2*10e32 elements in AdvancedSubtensor1. (Abalkin)
 * Fix wrong broadcast dimensions of output of Repeat op. (Abalkin)
   We were using the input's broadcasting pattern in some cases when we shouldn't have.
 * Fix theano.sandbox.linalg.eigh grad that didn't always return the right dtype. (Frederic B., Olivier D.)

New Features:
 * More Theano determinism (Ian G., Olivier D., Pascal L.)
     * Add and use a new class OrderedSet.
     * theano.grad is now deterministic.
     * Warn when the user uses a (non ordered) dictionary and this causes non-determinism in Theano.
     * The Updates class was non-deterministic; replaced it with the OrderedUpdates class.
 * tensor.tensordot now supports Rop/Lop (Jeremiah Lowin)
   This removes the classes TensorDot and TensorDotGrad; the Dot/Elemwise ops are used instead.
 * tensor.dot supports n-dimensional inputs as NumPy (Jeremiah Lowin)
   Works on the GPU too.
 * The Theano flag `nvcc.flags` now accepts `-ftz=true`, `--prec-div=false` and `--prec-sqrt=false` as values. (Frederic B.)
   To enable all of them, use the Theano flag `nvcc.flags=--use_fast_math`.
 * New op theano.sparse.ConstructSparseFromList (Rami Al-Rfou', Vivek Kulkarni)
 * Make Theano work with Anaconda on Windows. (Pascal L.)
 * Add tensor_var.diagonal and theano.tensor.{diag,diagonal}. (abalkin)
 * AdvancedSubtensor1 can now have a sparse gradient. (Rami Al-Rfou', Vivek Kulkarni)
 * Implemented GpuContiguous.grad. (Ian G.)

Interface Deprecation (a warning is printed):
 * theano.misc.strutil.renderString -> render_string (Ian G.)
 * Print a warning when using a dictionary where this makes Theano non-deterministic.

Interface Change:
 * Raise an error when theano.shared is called with a Theano variable. (Frederic B.)
 * Don't print warning for bug before Theano 0.5 by default. (Frederic B.)
 * Theano functions now always have a name field, defaulting to None. (Frederic B.)
 * A Theano function's fct.fgraph has a copy of the Theano function's name field. (Ian G.)
   This is needed to allow the fgraph to know it.
 * In the grad method, if it was asked to raise an error when there is no path between the variables, we didn't always return an error. (Ian G.)
   We returned the mathematically right answer, 0, in those cases.
 * get_constant_value() renamed get_scalar_constant_value(); it raises a new exception tensor.basic.NotScalarConstantError. (Ian G.)
 * theano.function raises an error when trying to replace inputs with the 'givens' parameter. (Olivier D.)
   This was doing nothing, the error message explains what the user probably wants to do.

New Interface (reuse existing functionality):
 * tensor_var.sort() as a shortcut for theano.tensor.sort. (Jeremiah Lowin)
   We were already doing this for argsort.
 * Add theano.tensor.take() and a_tensor_var.take() to support NumPy syntax. (abalkin)
 * Add a_tensor_variable.{dot,std,argmin,argmax,argsort,clip,conj,conjugate,repeat,round,trace,real,imag}. (abalkin)

New debug feature:
 * DebugMode print more info when there is an error. (Frederic B.)
 * Better profiling of test time with `theano-nose --time-profile`. (Frederic B.)
 * Detection of infinite loop with global optimizer. (Pascal L.)
 * DebugMode.check_preallocated_output now also works on Theano function outputs. (Pascal L.)
 * DebugMode will now complain when the strides of CudaNdarray of dimensions of 1 are not 0. (Frederic B.)

Speed-ups:
 * c_code for SpecifyShape op. (Frederic B.)
 * The cross-entropy optimization now works when specify_shape is used. (Pascal L.)
 * The Scan optimizations ScanSaveMem and PushOutDot1 are applied more frequently. (Razvan P., reported by Abalkin)
   Previously, a skipped-optimization warning was printed.
 * dot(vector, vector) now faster with some BLAS implementation. (Eric Hunsberger)
   OpenBLAS and possibly others didn't call {s,d}dot internally when we called {s,d}gemv.
   MKL was doing this.
 * Compilation speed up: take the compiledir lock only for ops that generate C code. (Frederic B)
 * More scan optimization (Razvan P.)
     * Opt to make RNN fast in Theano.
     * Optimize some case of dot, by moving them outside of Scan.
     * Move some sequences outside of scan too.
     * Merge more scan inputs, mostly byproduct of other Scan optimizations.
 * c_code for theano.sparse.AddSD. (Rami Al-Rfou', Vivek Kulkarni)

Crash Fixes:
 * Fix crash about dimshuffle. (abalkin)
 * Fix crash at compilation. (Olivier D.)
 * Fix openmp detection. (Pascal L.)
   Resulted in a crash with EPD on Windows.
 * Fix for new BLAS interface in SciPy. (Olivier D.)
   Fix crash with some development version of SciPy.
 * GpuSum works with bigger shapes when summing on the first dim of a 3d tensor. (Frederic B., reported by Chris Currivan)
 * Windows compilation crash fix. (Frederic B.)
 * Make CrossentropySoftmax1HotWithBiasDx and CrossentropySoftmaxArgmax1HotWithBias support uint* dtype. (Frederic B., reported by Mark Fenner)
 * Fix GpuSoftmax and GpuSoftmaxWithBias crash on GTX285. (Frederic B.)
 * Fix crash due to a race condition when importing theano. (Ian G.)
 * Fix crash from path problem with `theano-nose --batch`. (Abalkin)
 * Fix crash with tensor.roll(Var, iscalar). (Frederic B., reported by Jeremiah Lowin)
 * Fix compilation crash with llvm on Mac. (Abalkin)
 * Fix the grad of Scan that wrongly reported that there is no connection between cost and parameters. (Razvan P.)
 * The infer shape mechanism now forces broadcasted dimensions to have a shape known to be equivalent to one during compilation.
   Sometimes we were not able to know this before run time, and that resulted in a crash. (Frederic B.)
 * Fix compilation problems on GPU on Windows. (Frederic B.)
 * Fix copy on the GPU with big shape for 4d tensor (Pascal L.)
 * GpuSubtensor didn't set the stride to 0 for dimensions of 1. This could lead to check failing later that caused a crash. (Frederic B., reported by vmichals)

Theoretical bugfix (bug that won't happen with current Theano code, but if you messed with the internal, could have affected you):
 * GpuContiguous, GpuAlloc, GpuDownSampleGrad and Conv2d now check the preallocated output's strides before using it. (Pascal L.)
 * GpuDownSample and GpuDownSampleGrad didn't work correctly with negative strides in their output, due to a problem with nvcc (Pascal L, reported by abalkin?)

Others:
 * Fix race condition when determining if g++ is available. (Abalkin)
 * Documentation improvements. (Many people including David W-F, abalkin, Amir Elaguizy, Olivier D., Frederic B.)
 * The current GPU back-end has a new function CudaNdarray_prep_output(CudaNdarray ** arr, int nd, const int * dims) (Ian G)


Theano 0.6rc2 (November 21st, 2012)
===================================

Highlights:
 * Fix for a few regressions introduced in 0.6rc1.
 * A few new features.
 * Speed-ups.
 * Scan fixes.
 * Crash fixes.
 * A few small interface changes.

Committers for this rc2 only:
Razvan Pascanu
Pascal Lamblin
Frederic Bastien
Ian Goodfellow
Jeremiah Lowin
Caglar Gulcehre
Jey Kottalam
Matthew Rocklin
abalkin


Regressions in 0.6rc1 fixed:
 * Fixed the scan gradient dtype issue. In 0.6rc1, some upcasts were inserted. (Razvan P.)
 * Now grad() will do as before 0.6rc1 for float, i.e. the grad dtype will be the same as the inputs inside the graph. If you ask for the direct grad, it will return the computed dtype. (Pascal L.)

Wrong results fixes:
 * Scan in some cases didn't return the right results. (Razvan P., reported by Jeremiah L.)
   This happened if you had a state with only negative taps and the output of that state was a function of some sequence.
   If you had multiple states, there was no problem.
 * Fixed bug in Scan with multiple outputs,
   where one output would sometimes overwrite another one. (Razvan P.)
 * Clip.grad treated the gradient with respect to the clipping boundary as always 0. (Ian G.)

Interface changes:
 * We no longer support unaligned ndarrays in Python code. (Frederic B.)
   We did not support them in C code, and supporting them in Python code made
   the detection harder.
 * Now we only officially support SciPy 0.7.2 and NumPy 1.5.0 (Frederic B.)
   We weren't and aren't testing with older versions.
 * The theano.sparse.SparseType is available even when SciPy is not (Frederic B.)
 * Fixed issue where members of consider_constant grad parameter
   were treated differently from Constant variables. (Ian G.)
 * Removed the parameter g_cost from theano.grad(). (Ian G.)
   Use the new more powerful parameter known_grads instead.

NumPy interface support:
 * theano.tensor.where is an alias for theano.tensor.switch to support NumPy semantic. (Ian G.)
 * TensorVariable objects now have dot, argmin, argmax, clip, conj, repeat, trace, std, round,
   ravel and argsort functions and the real and imag properties as numpy.ndarray objects.
   The functionality was already available in Theano. (abalkin)

Speed-ups:
 * A C version of the SoftMax op (Razvan P.)
   There was already C code for the softmax-with-bias op.
 * Faster GpuIncSubtensor (Ian G.)
 * Faster copy on the GPU for 4d tensor. (Ian G.)
 * The fix of flatten infer_shape re-enables an optimization (Pascal L.)
   * The bug was introduced in 0.6rc1.
 * Enable inc_subtensor on the GPU when updating it with a float64 dtype. (Ian G.)
   It was causing an optimization warning.
 * Make DeepCopy reuse preallocated memory. (Frederic B.)
 * Move the convolution to the GPU when the image shape and logical image shape differ. (Frederic Bastien)
 * C code for the View Op (Razvan P., Pascal L.)

New Features:
 * Added a monitoring mode "MonitorMode" as a debugging tool. (Olivier D.) See the sketch after this list.
 * Allow integer axes when keepdims==True (Jeremiah Lowin)
 * Added erfinv and erfcinv op. (Jey Kottalam)
 * Added tensor.batched_dot(). (Caglar Gulcehre)
   It uses scan behind the scenes, but makes doing this easier.
 * theano.get_constant_value(x) (Frederic B.)
    This tries to obtain x as a constant int.
    It does some constant folding to try to convert x into an int.
   Used by some optimizations.
 * Add theano.tensor.io.{MPIRecv,MPIRecvWait,MPISend,MPISendWait} (Matthew Rocklin)
   Theano does not automatically use them. It is up to you to use them and split your computations.
 * Added theano.sandbox.linalg.eig (abalkin)
 * Started some support for Python3 (abalkin)
   setup.py supports python3 now.
   It calls 2to3 during the setup.
   Python3 is not fully supported as we didn't update the C code.
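
A minimal sketch of using MonitorMode to find where a NaN first appears (the mode class is assumed to be reachable as theano.compile.MonitorMode with a post_func hook taking (i, node, fn), as in the later documentation)::

    import numpy as np
    import theano
    import theano.tensor as T

    def detect_nan(i, node, fn):
        # Called after each apply node runs; inspect its outputs.
        for output in fn.outputs:
            if np.isnan(output[0]).any():
                print('NaN detected in the output of', node)
                break

    x = T.dscalar('x')
    f = theano.function([x], T.log(x),
                        mode=theano.compile.MonitorMode(post_func=detect_nan))
    f(-1.0)  # log of a negative number gives NaN and triggers the hook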


Crash Fixes:
 * Fix a crash related to scan.grad due to the new mechanism. (Ian G.)
 * Fix an optimization warning. Now it gets optimized. (Frederic B.)
 * Fix crash introduced in 0.6rc1 in theano.grad (Ian G.)
 * Fix crash introduced in 0.6rc1 in the grad of scan (Razvan P.)
 * Fix crash introduced in 0.6rc1 in the grad of clip (Ian G.)
   Also implement the gradient on the min/max bound.
 * Fix crash in the grad of tensor.switch for int (Ian G.)
 * Fix crash when mixing shared variable on the GPU and sparse dot. (Pascal L.)
 * Fix crash as sometimes sparse.dot would return a different dtype number
   that is equivalent but not the one expected. (Pascal L., reported by Rami Al-Rfou)
 * Better error messages (Ian G.)
 * Move all sparse random functions back to sandbox as they don't have a state inside Theano. (Pascal L.)
   They were moved outside the sandbox in 0.6rc1
 * LoadFromDisk is now allowed to only support some memmap modes. (Pascal L.)
   Otherwise, this was causing errors, segmentation faults or wrong results.
 * Fix import problem on PiCloud (Jeremiah Lowin)
    * You need to use the c|py linker with the default
      environment. Otherwise, you need to create your own environment.
 * Fix a crash during optimization when we take a subtensor of a constant with a non constant index. (Ian G.)
 * Better handling and error message of gradients on integer. (Ian G.)
 * Fixed a crash where Scan assumed all TypeErrors raised by the grad function were due to undefined gradients (Ian G.)

Other:
 * Doc typo fixes, Doc updates, Better error messages: Olivier D., David W.F., Frederic B., James B., Matthew Rocklin, Ian G., abalkin.


Theano 0.6rc1 (October 1st, 2012)
=================================

Highlights:
 * Bug fixes, crash fixes, CPU and GPU speed up.
 * theano_var.eval({other_var: val[,...]}) to simplify the usage of Theano (Ian G.); see the example below.
 * New default linker `cvm`. This is the execution engine that tells ops to run in certain orders.
   It is now implemented in C and enables lazy evaluation of ifelse op.
 * Faster theano.function compilation. (Pascal L., Ian G.)
 * Big sparse submodule update and documentation of it. (Nicolas Bouchard)
 * Use GPU asynchronous functionality (Frederic B.)
 * Better Windows support.
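
For example::

    import theano.tensor as T

    x = T.dscalar('x')
    y = x ** 2 + 1
    # eval() compiles (and caches) a small function behind the scenes,
    # so quick checks don't need an explicit theano.function call.
    print(y.eval({x: 3.0}))  # -> 10.0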

Known bugs:
 * A few crash cases that will be fixed by the final release.

Bug fixes:
 * Outputs of Scan nodes could contain corrupted values: some parts of the
   output would be repeated a second time, instead of the correct values.
   It happened randomly, and quite infrequently, but the bug has been present
   (both in Python and Cython) since April 2011. (Pascal L.)
 * In Sparse sandbox, fix the grad of theano.sparse.sandbox.sp.row_scale.
   It did not return the right number of elements. (Frederic B.)
 * set_subtensor(x[int vector], new_value) when moved to the GPU
   was transformed into inc_subtensor on the GPU. Now we have a correct
   (but slow) GPU implementation.
   Note 1: set_subtensor(x[slice[,...]], new_value) was working correctly
   in all cases as well as all inc_subtensor.
   Note 2: If your code was affected by the incorrect behavior, we now print
   a warning by default (Frederic B.)
 * Fixed an issue whereby config values were used as default arguments,
   with those defaults then stuck at old values if the config variables were
   changed during program execution. (David W-F)
 * Fixed many subtle bugs involving mutable default arguments which may have
   led to unexpected behavior, such as objects sharing instance variables
   they were not supposed to share. (David W-F)
 * Correctly record the GPU device number used when we let the driver select it.
   (Frederic B.)
 * Min, max with NaN in inputs did not return the right output. (Pascal L.)
 * The grad of TensorDot, was returning the wrong shape for some combination of axes.
   We now raise NotImplementedError in those cases. (Frederic B.)
 * conv2d with subsample >2 returned wrong values. (Pascal L.)
     * Fixed when mode==valid, disabled when mode==full
 * theano.sparse.CSMGrad op (generated by the grad of CSM) didn't
   handle unsorted input correctly and gradient that is sparser
   than the input. In that case, a bad result was returned. But this could
   happen only when a sparse input of a Theano function was not
   sorted. This happens for example with sparse advanced indexing from
   scipy. The result was, most of the time, NaN in the graph.
   (Yann Dauphin)
 * theano.sparse._dot(CSC matrix, dense) optimized version UsmmCSCDense didn't
   correctly handle non-contiguous inputs/outputs. (Pascal L.)
 * Fix a corner case in CVM updates. (Pascal L.)
   This happened if the update to a shared variable is itself after optimization.
   The CVM was not used by default.
 * Fix the view_map of sparse.Transpose and sparse.sandbox.sp.RowScale. (Frederic B.)
   This probably didn't cause problems, as there is only the UsmmCscDense op
   (used call to Usmm with CSC matrix) that could interfere with them.

Deprecation:
 * Deprecated the Module class (Ian G.)
   This was a predecessor of SharedVariable with a less pythonic philosophy.

Interface changes:
 * Now the base version requirements are numpy >= 1.5.0 and the optional scipy >= 0.7.2.
 * In Theano 0.5, we removed the deprecated sharedvar.value property.
   Now we raise an error if you access it. (Frederic B.)
 * theano.function does not accept duplicate inputs, so function([x, x], ...)
   does not work anymore. (Pascal L.)
 * theano.function now raises an error if some of the provided inputs are
   not part of the computational graph needed to compute the output, for
   instance, function([x, y], [y]). You can use the kwarg
   ``on_unused_input={'raise', 'warn', 'ignore'}`` to control this.
   (Pascal L.) See the example after this list.
 * New Theano flag "on_unused_input" that defines the default value of the
   previous point. (Frederic B.)
 * tensor.alloc() now raises an error during graph build time
    when we try to create fewer dimensions than the number of dimensions
    the provided value has. In the past, the error was at run time.
   (Frederic B.)
 * Remove theano.Value and related stuff (Ian G.)
   This was a test of what ended up as SharedVariable.
 * Renamed Env to FunctionGraph, and object attribute "env" to "fgraph" (Ian G.)
   Deprecation warning printed when you try to access the "env" attribute.
 * Renamed the FunctionGraph.nodes attribute to FunctionGraph.apply_nodes (Ian G.)
 * Warn when we don't handle correctly the parameter in Theano flags `nvcc.flags`
   (Frederic B.)
 * Do not reorder the user flags passed to the compiler. They get set after other flags. (Frederic B.)
 * Make setuptools optional (Ilan Schnell)
 * We warn when a user tries to use an old GPU with which Theano is untested.
   This could cause crashes and will also be very slow. (Frederic B.)
 * Make theano.grad able to differentiate between not implemented, undefined and disconnected grad.
   Op.grad function should return theano.gradient.{grad_not_implemented,grad_undefined} or
   something of DisconnectedType (Ian G.)
 * Make theano.grad expect to always receive a float or undefined
   gradient and enforce that ops with integer output values always
   return 0. (Ian G.)
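
A small sketch of the new on_unused_input kwarg mentioned above::

    import theano
    import theano.tensor as T

    x = T.dscalar('x')
    y = T.dscalar('y')
    # x is not needed to compute y; this now raises an error by default.
    f = theano.function([x, y], y, on_unused_input='warn')
    print(f(0.0, 2.0))  # warns about the unused input x, returns 2.0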


New memory output contract (was mentioned in the release notes of Theano 0.5):
 * Now the output memory received can be preallocated by other stuff.
   In the past it was always the previous output an Apply node allocated.
   So this means that the shape and strides can be different from previous calls
   and there can be links to this memory at other places.
   This means it could receive preallocated output that is not c_contiguous.
   But we don't do that now. (Pascal L.)
 * New Theano flags to test this DebugMode.check_preallocated_output (Pascal L.)
 * Updated a few ops to respect this contract (Pascal L.)


New Features:
 * GPU scan now works (does not crash) when there is a mixture of float32 and other dtypes.
 * theano_var.eval({other_var: val[,...]}) to simplify the usage of Theano (Ian G.)
 * debugprint new param ids=["CHAR", "id", "int", ""]
   This makes the identifier printed to be a unique char, the Python id, a
   unique int, or not have it printed. We changed the default to be "CHAR"
   as this is more readable. (Frederic B.)
 * debugprint new param stop_on_name=[False, True]. If True, we don't print
   anything below an intermediate variable that has a name. Defaults to False.
   (Frederic B.)
 * debugprint does not print anymore the "|" symbol in a column after the last input. (Frederic B.)
 * If you use the Enthought Python Distribution (EPD), we now use its BLAS
   implementation by default. (Frederic B., Graham Taylor, Simon McGregor)
 * MRG random now raises an error with a clear message when the passed shape
   contains dimensions with bad value like 0. (Frederic B. reported by Ian G.)
 * "CudaNdarray[*] = ndarray" works in more cases (Frederic B.)
 * "CudaNdarray[*] += ndarray" works in more cases (Frederic B.)
 * We add dimensions to CudaNdarray to automatically broadcast more frequently.
   (Frederic B.)
 * New theano flag cmodule.warn_no_version. Default False. If True,
   will print a warning when compiling one or more Op with C code that
   can't be cached because there is no c_code_cache_version() function
   associated to at least one of those Ops.  (Frederic B.)
 * CPU alloc now always generate C code (Pascal L.)
 * New Theano flag cmodule.warn_no_version=False. When True, warn when an op
   with C code is not versioned (which forces recompiling it every time).
   (Frederic B.)
 * C code reuses preallocated outputs (only done by Scan) (Pascal L.)
 * Garbage collection of intermediate results during Theano function calls
   for Ops with C code (Pascal L.)
 * Theano flag compiledir_format now supports the parameter "numpy_version" and "g++". (Frederic B.)
 * Theano GPU variables, shared variables and constants now support <, <=,
   > and >= similarly to those not on the GPU.
 * AdvancedIncSubtensor now supports the set_instead_of_inc parameter. (Eric L.)
 * Added Advanced Indexing support to inc_subtensor and set_subtensor. (Eric L.)
 * theano.tensor.{any,all,std,var,mean,prod,sum,argmin,argmax,min,max,max_and_argmax}
   have a new parameter keepdims (Eric L.)
   This allows broadcasting the result correctly against the input data, e.g. to
   normalize it. See the example after this list.
 * The Updates objects now check that the keys are SharedVariable when we pass them
   in the __init__ function. (Pascal L.)
 * Set a Theano Variable name on transposed op when the input has one (Frederic B).
 * The cvm linker now supports garbage collection (enabled by default). (James B., Arnaud B., Pascal L.)
 * The cvm linker is now the default linker.
   This makes the "loop" around the execution of apply node in C. So this lowers the overhead.
 * theano_variable[numpy.newaxis] is now supported (James B.)
 * Enable ifelse on the GPU. (Frederic B.)
 * Correctly support numpy.memmap everywhere (Pascal L.)
    We added partial support for them before. Just use the normal tensor operations
   on them and it should work.
   But be careful not to exhaust your computer memory! (we always generate normal ndarray)
 * Add an optimization that stabilizes log(softmax(x)). (Ian G.)
 * Re-enable the Images2Neibs grad. It was not broken, the problem was how we tested it. (Frederic B.)
 * If `theano_fn.trust_input` is set to False, do not check if the inputs are good
   when calling the theano function. (Frederic B.)
 * Add theano.tensor.blas.gem{m,v} as shortcuts.
 * theano.grad(..., add_names=True). False for the old
   behavior. Otherwise it tries to name the grad variables. (Ian G.)
 * theano-nose (Pascal L.)
   A wrapper around nosetests that adds needed extensions.
   * --profile-time option, to print the time spent in each test (Eric L.)
   * --batch option, to run tests in batches and lower the memory requirement.
 * There is a stabilization optimization for expressions of the form
   m = mean(log(1 - sigm(x))); x - scalar * theano.grad(m, x).
   It is now applied more frequently. (Pascal L.)
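
As an illustration of the new debugprint parameters above, here is a
minimal sketch (assuming a working Theano install; the variables are only
examples)::

    import theano
    import theano.tensor as T

    x = T.vector('x')
    h = T.tanh(x)
    h.name = 'h'
    y = h ** 2

    # Unique single-character identifiers (the new default).
    theano.printing.debugprint(y, ids="CHAR")
    # Hide everything below the named intermediate variable `h`.
    theano.printing.debugprint(y, stop_on_name=True)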


New Op/functions:
 * Added element-wise operations theano.tensor.{GammaLn,Psi} (John Salvatier, Nicolas Bouchard)
 * Added element-wise operations theano.tensor.{arcsin,arctan,arccosh,arcsinh,arctanh,exp2,arctan2} (Nicolas Bouchard)
 * Added element-wise operations theano.tensor.{gamma,conj,complex_from_polar,expm1,deg2rad,rad2deg,trunc} (Nicolas Bouchard)
 * Added theano.tensor.argsort that wraps numpy.argsort (Hani Almousli).
 * Added theano.tensor.diff that wraps numpy.diff (Nicolas B.)
 * Added theano.tensor.bincount that wraps numpy.bincount (Nicolas B., Pascal L., Frederic B.)
   (see the sketch after this list)
 * Added theano.tensor.squeeze (Nicolas B.)
   This removes broadcasted dimensions from the variable;
   the Theano counterpart of numpy.squeeze.
 * Added theano.tensor.repeat that wraps numpy.repeat (Nicolas B. + PL)
 * Added theano.tensor.bartlett that wraps numpy.bartlett (Eric L.)
 * Added theano.tensor.fill_diagonal that wraps numpy.fill_diagonal (Eric L., Frederic B.)
 * Added tensor.square as an alias for tensor.sqr, matching NumPy's naming (Ian G.)
 * Added theano.tensor.load(path, dtype, broadcastable, mmap_mode=None), an op
   that loads a .npy file into a theano graph (Matthew Rocklin)
 * theano.sandbox.linalg.kron.py: Kron op (Kronecker product). (Eric L.)
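
A minimal sketch of the new NumPy-wrapping ops above (assuming they are
importable as listed; the input values are only examples)::

    import numpy as np
    import theano
    import theano.tensor as T

    v = T.ivector('v')
    f = theano.function([v], [T.argsort(v),    # wraps numpy.argsort
                              T.diff(v),       # wraps numpy.diff
                              T.bincount(v)])  # wraps numpy.bincount
    order, steps, counts = f(np.array([1, 0, 2, 2], dtype='int32'))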

Speed up:
 * CPU convolutions are now parallelized (Frederic B.)
   By default, all cores/hyper-threads are used.
   To control this, set the `OMP_NUM_THREADS=N` environment variable, where N
   is the number of parallel threads to use (see the sketch after this list).
   There is a new Theano flag `openmp` to allow/disallow OpenMP ops.
   If your BLAS library is parallelized, this flag won't affect it, but the
   environment variable will.
 * Remove a corner case causing duplicated dot22/gemm in the graph. (Frederic B., Ian G.)
 * Enable fusion of elemwise that have the same clients multiple times. (Frederic B.)
 * New optimization: Remove reduction over broadcastable dimensions (James B., Frederic B.)
 * Faster theano.function compilation. (Pascal L., Ian G.)
 * Remove GPU transfer around specify_shape op. (Frederic B.)
 * Implemented/tested MANY op.infer_shape methods (Eric Larsen)
   This allows Theano to make better shape inference.
 * Implement Solve.infer_shape (Matthew Rocklin)
 * Scan memory optimizations now work more frequently. (Razvan P.)
   Previously, the subtensor optimization printed a warning in those cases.
 * Faster rng_mrg Python code. (mostly used for tests) (Frederic B.)
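
A hedged sketch of controlling the new convolution parallelism (the
environment variables must be in place before Theano compiles the op, so
setting them in the shell before starting Python is the safest route)::

    # In the shell: OMP_NUM_THREADS=4 THEANO_FLAGS=openmp=True python script.py
    # Or, before importing theano:
    import os
    os.environ.setdefault('OMP_NUM_THREADS', '4')   # threads for OpenMP ops
    os.environ.setdefault('THEANO_FLAGS', 'openmp=True')
    import theano  # the flags are read at import time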

Speed up GPU:
 * Convolution on the GPU now checks the generation of the card to make
   it faster in some cases (especially medium/big output images) (Frederic B.)

     * We had hardcoded 512 as the maximum number of threads per block. Newer cards
       support up to 1024 threads per block.
 * Faster GpuAdvancedSubtensor1, GpuSubtensor, GpuAlloc (Frederic B.)
 * We now pass the GPU architecture to nvcc when compiling (Frederic B.)
 * Now we use the GPU function async feature by default. (Frederic B.)
   Set the environment variable `CUDA_LAUNCH_BLOCKING` to `1` to disable this
   for profiling or debugging.
 * Faster creation of CudaNdarray objects (Frederic B.)
 * Now some Max reductions are implemented on the GPU. (Ian G.)

Sparse Sandbox graduate (moved from theano.sparse.sandbox.sp):
 * sparse.remove0 (Frederic B., Nicolas B.)
 * sparse.sp_sum(a, axis=None) (Nicolas B.)
     * bugfix: the non-structured grad was returning a structured grad.
 * sparse.{col_scale,row_scale,ensure_sorted_indices,clean} (Nicolas B.)
 * sparse.{diag,square_diagonal} (Nicolas B.)

Sparse:
 * Support for uint* dtypes.
 * Implement theano.sparse.mul(sparse1, sparse2) when both inputs don't
   have the same sparsity pattern. (Frederic B.)
 * New Ops: sparse.{expm1,deg2rad,rad2deg,trunc} (Nicolas B.)
 * New Ops: sparse.{sqrt,sqr,log1p,floor,ceil,sgn,round_half_to_even} (Nicolas B.)
 * New Ops: sparse.{arctanh,tanh,arcsinh,sinh,arctan,arcsin,tan,sin} (Nicolas B.)
 * New functions: structured_{add,exp,log,pow,minimum,maximum,sigmoid} (Yann D., Nicolas B.)
     * Optimized ops: StructuredAddSV, StructuredAddSVCSR (inserted automatically)
 * New Op: sparse.mul_s_v multiplication of sparse matrix by broadcasted vector (Yann D.)
 * New Op: sparse.Cast() (Yann D., Nicolas B.)
     * Added sparse_variable.astype() and theano.sparse.cast() and
       theano.sparse.{b,w,i,l,f,d,c,z}cast() as their tensor equivalents
       (Nicolas B.) (see the sketch after this list)
 * Op class: SamplingDot (Yann D., Nicolas B.)
   * Optimized version: SamplingDotCsr, StructuredDotCSC
   * Optimizations to insert the optimized version: local_sampling_dot_csr, local_structured_add_s_v
 * New Ops: sparse.{Multinomial,Poisson,Binomial} (Yann D., Nicolas B.)
 * Implement the CSMProperties grad method (Yann Dauphin)
 * Move optimizations to theano/sparse/opt.py (Nicolas B.)
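
A minimal sketch of the new sparse casting interface (assuming a
SciPy-backed install; the names follow the entries above)::

    import theano.sparse as sparse

    x = sparse.csr_matrix('x', dtype='float64')
    y = x.astype('float32')            # new sparse_variable.astype()
    z = sparse.cast(x, 'float32')      # functional equivalent
    w = sparse.structured_sigmoid(x)   # applied to stored values only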

New flags:
 * The `profile=True` flag now prints the sum of all printed profiles
   (see the sketch after this list). (Frederic B.)
     * It works with the vm/cvm linkers (default).
     * It also prints the compile time, optimizer time and linker time.
     * It also prints a summary by op class.
 * new flag "profile_optimizer" (Frederic B.)
   when profile=True, will also print the time spent in each optimizer.
   Useful to find optimization bottleneck.
 * new flag "cmodule.remove_gxx_opt" (Frederic B.)
   If True, will remove -O* parameter passed to g++.
   This is useful to debug in gdb module compiled by Theano.
   The parameter -g is passed by default to g++.
 * new flag cmodule.compilation_warning
   If True, compilation warnings are printed.
 * new flag `allow_gc` (Frederic B.)
   When False, intermediate results are not garbage collected when they are
   no longer needed. This uses more memory, but allocates memory less
   frequently, so it can be faster.
 * new flag `vm.lazy` (Frederic B.)
   Useful only for the vm linkers. When lazy is None, Theano auto-detects
   whether lazy evaluation is needed and uses the appropriate version. If
   lazy is True/False, it forces the version used, between Loop/LoopGC and
   Stack.
 * new flag `cxx`. This is the C++ compiler to use. If empty, C code is not compiled. (Frederic B.)
 * New flag `print_active_device` that defaults to True. (Matthew R.)
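
A minimal sketch combining some of the new flags (they are normally set
through the THEANO_FLAGS environment variable or ~/.theanorc before theano
is imported; the values below are only examples)::

    import os
    os.environ['THEANO_FLAGS'] = ','.join([
        'profile=True',            # print a profile summary at exit
        'profile_optimizer=True',  # also time each optimizer
        'allow_gc=False',          # keep intermediates: more memory, less realloc
    ])
    import theano  # the flags take effect at import time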

Documentation:
 * Added documentation in the tutorial on how to extend Theano.
   This explains how to make a Theano Op from a Python function.
   http://deeplearning.net/software/theano/tutorial/extending_theano.html
   (Frederic B.)
 * New installation instructions for Windows using EPD (Pascal L.)
 * New installation on Windows by using a Linux VM from ContinuumIO (Frederic B.)
 * Revisions of Theano tutorial and addition of exercises to it. (Eric L.)
 * New tutorial on sparse variables. (Nicolas B., Sebastien Lemieux, Frederic Bastien)
   http://www.deeplearning.net/software/theano/tutorial/sparse.html
 * Installation documentation for CentOS6 (Frederic B.)
 * Installation documentation for Ubuntu (with GPU) (Frederic B., Matthias Zoehrer)
 * Doc typo fixes, Doc updates, Better error messages: Olivier D., David W.F., Frederic B., James B., Matthew Rocklin, Ian G.
 * Python Memory Management tutorial (Steven Pigeon, Olivier D.)

Proposal:
 * Math framework for complex gradients (Pascal L.)


Internal changes:
 * Define new exceptions MissingInputError and UnusedInputError, and use them
   in theano.function, instead of TypeError and ValueError. (Pascal L.)
 * Better handling of bitwidth and max values of integers and pointers
   across platforms (Pascal L.)
 * Made a few Ops with C code versioned to reduce compilation time.
   (Frederic B, Pascal L.)
 * Better deletion of files in the compiledir (Frederic B.)
 * Safer import on sort op (Nicolas Pinto)
 * hash_from_dict for elemwise ops (Frederic B.)
 * Renamed BadCLinkerOutput into BadThunkOutput. (PL)
 * tensor.utils.shape_of_variables (Matthew R.)
 * Add the numpy abi version and g++/nvcc version in the key of compiled code. (Frederic B.)
 * env.replace_all_validate_remove (Frederic B.)
   This allows a global optimizer to ensure that it removed certain nodes
   from the graph. It is a generic way to catch errors that would otherwise
   result in duplicated computation.
   * It was used for the GEMM and Scan optimizations (Frederic B., Razvan P.)
 * Fix how exceptions are raised in GPU code (James B.)
 * Made code respect pep8: OD, Fred, Pascal L., Nicolas Bouchard, Eric Larsen and others.
 * TensorType and CudaNdarrayType now have a value_zeros method that calls
   CudaNdarray.zeros or numpy.zeros with the right dtype. (Pascal L., Olivier D.)
   This allows the same code to work with both types.
 * Renamed FunctionGraph.extend function to FunctionGraph.attach_feature. (Ian G.)
 * New exception MissingGXX when we try to compile but there is no cxx compiler. (Frederic B.)
 * New fct theano.gof.utils.give_variables_names(...) that gives unique names to variables. (Matthew R.)
 * Use the new NumPy C-API most of the time, for compatibility with later NumPy releases. (Frederic B.)
 * New theano.gof.sched.sort_apply_nodes() that will allow other execution orderings. (Matthew R.)
 * New attribute sort_schedule_fn, a way to specify a scheduler to use. (Matthew R.)

Crash Fix:
 * Fix import conflict name (usaar33, Frederic B.)
    * This makes Theano work with PiCloud.
 * Do not try to use the BLAS library when blas.ldflags is manually set to an
   empty string (Frederic B., Pascal L.)
 * Fix crash when importing theano on a computer without a GPU while the Theano
   flags 'device' or 'init_gpu_device' are set to gpu*. (Frederic B., reported by Luo Heng)
 * Optimization printed a useless error when scipy was not available. (Frederic B.)
 * GPU conv crash/slowdown on newer hardware (James B.)
 * Better error handling in GPU conv (Frederic B.)
 * GPU optimization that moves element-wise Ops to the GPU. Crash happened in
   a particular execution order of this optimization and the
   element-wise fusion optimization when upcasting some inputs to
   float32 (to compute them on the GPU).
   (Frederic B., reported by Sander Dieleman)
 * GpuReshape in some particular case when the input is not contiguous
   (Frederic B., reported by Sander Dieleman)
 * GpuSoftmaxWithBias with shape (0, N) with N > 1.
   (Frederic B., reported by Razvan P.)
 * Fix crash under 64-bit Windows, when taking subtensors of the form a[n:]
   (Pascal L., reported by Simon McGregor)
 * Fixed issue with the MaxAndArgmax Op not properly preserving broadcastable
   dimensions, which could typically result in optimization crashes (Olivier D.)
 * Fixed crash when concatenating some arrays with specific broadcasting
   patterns (Olivier D.)
 * Work around a known issue with nvcc 4.1 on MacOS X. (Graham Taylor)
 * In advanced indexing, if some inputs are constant, no need to call constant(...)
   on their value any more. (Pascal L., reported by John Salvatier)
 * Fix crash on GPU when GpuSubtensor did not set the right stride
   when the result tensor had a dimension of size 1. (Pascal L.,
   reported by Graham T.)
 * Fix scan crash that made it not run on the GPU in one case. (Guillaume D.)
 * Don't crash when taking the grad of a random state again. (Razvan P.)
 * GpuDownsampleFactorMax and its grad with input dimensions 0 and 1 bigger than 65535.
   (Frederic B., reported by Gabe Schwartz)
 * Potential crash due to parallel compilation when importing theano.sandbox.cuda
   (Olivier D.)
 * Crash fix on python 2.4 with slicing. (Pascal L.)
 * grad of argmin and argmax (Razvan P.)
 * Don't compute the Rop for shared variables with updates (mostly random ones).
   We don't use them and they caused crashes. (Razvan P.)
 * MaxAndArgmax.grad() when one of the gradients it receives is None. (Razvan P., reported by Mark Fenner)
 * Fix crash of GpuSum when some dimension's shape was 0. (Frederic B.)

Tests:
 * Use less memory (Olivier D.) (fix crash on 32-bit computers)
 * Fix test with Theano flag "blas.ldflags=". (Frederic B., Pascal L.)
 * Fix crash with advanced subtensor and numpy constant.
 * Fix random test failures caused by random values. (Pascal L.)
 * Always introduce an Alloc node when calling alloc, and let the optimizer remove it if needed.
   This allows DebugMode to catch some shape errors. (Pascal L.)
 * DebugMode now checks the view_map for all types of Theano variables.
   Previously it only checked variables of tensor type. (Frederic B.)

Others:
 * Remove python warnings for some python versions. (Gabe Schwartz)
 * Remove useless fill ops in fast_compile mode to make the graph more readable. (Frederic B.)
 * Remove GpuOuter as it is a subset of the new GpuGer (Frederic B.)
 * Now we use http://travis-ci.org/ to run all CPU tests (without SciPy)
   with the default mode on all Pull Requests.
   This should make the trunk more stable. (Frederic B.)
 * Our nightly buildbot now checks on python 2.4 (Frederic B.)
   This should make the trunk work on it more frequently.

Other thanks:
 * blaxill reported an error introduced into the trunk.

New stuff that will probably be reworked/removed before the release:
 * Better PyCUDA sharing of the GPU context (fixes a crash at exit). (Frederic B.)
   TODO: there is still a crash at exit!


Theano 0.5 (23 February 2012)
=============================

Highlights:
 * Moved to github: http://github.com/Theano/Theano/
 * Old trac tickets moved to assembla tickets: http://www.assembla.com/spaces/theano/tickets
 * Theano vision: http://deeplearning.net/software/theano/introduction.html#theano-vision (Many people)
 * Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
 * Faster dot() call: New/Better direct call to cpu and gpu ger, gemv, gemm
   and dot(vector, vector). (James, Frédéric, Pascal)
 * C implementation of Alloc. (James, Pascal)
 * theano.grad() now also works with sparse variables. (Arnaud)
 * Macro to implement the Jacobian/Hessian with theano.tensor.{jacobian,hessian} (Razvan)
 * See the Interface changes.


Interface Behavior Changes:
 * The default value of the parameter axis of
   theano.{max,min,argmax,argmin,max_and_argmax} is now the same as
   numpy: None, i.e. operate on all dimensions of the tensor
   (see the sketch after this list).
   (Frédéric Bastien, Olivier Delalleau) (this was deprecated and had
   generated a warning since Theano 0.3, released Nov. 23rd, 2010)
 * The current output dtype of sum with input dtype [u]int* is now always [u]int64.
   You can specify the output dtype with a new dtype parameter to sum.
   The output dtype is the one used for the summation.
   There is no warning in previous Theano versions about this.
   The consequence is that the sum is done in a dtype with more precision than before.
   So the sum could be slower, but will be more resistant to overflow.
   This new behavior is the same as numpy. (Olivier, Pascal)
 * When using a GPU, detect faulty nvidia drivers. This was detected
   when running Theano tests. Now this is always tested. Faulty
   drivers result in wrong results for reduce operations. (Frederic B.)
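
A minimal sketch of the two behavior changes above (the dtypes shown
assume a 64-bit platform)::

    import theano.tensor as T

    x = T.imatrix('x')
    m = T.max(x)                   # axis=None: max over all dimensions, as in numpy
    s = T.sum(x)                   # int32 input now accumulates into int64
    s32 = T.sum(x, dtype='int32')  # the new dtype parameter restores the old dtype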


Interface Features Removed (most were deprecated):
 * The string modes FAST_RUN_NOGC and STABILIZE are no longer accepted. They
   were accepted only by theano.function().
   Use Mode(linker='c|py_nogc') or Mode(optimizer='stabilize') instead.
 * tensor.grad(cost, wrt) now always returns an object of the "same type" as wrt
   (list/tuple/TensorVariable). (Ian Goodfellow, Olivier)
 * A few tag.shape and Join.vec_length left have been removed. (Frederic)
 * The .value attribute of shared variables is removed, use shared.set_value()
   or shared.get_value() instead. (Frederic)
 * Theano config option "home" is not used anymore as it was redundant with "base_compiledir".
   If you use it, Theano will now raise an error. (Olivier D.)
 * scan interface changes: (Razvan Pascanu)
    * The use of `return_steps` for specifying how many entries of the output
      to return has been removed. Instead, apply a subtensor to the output
      returned by scan to select a certain slice.
    * The inner function (that scan receives) should return its outputs and
      updates following this order:
        [outputs], [updates], [condition].
      One can skip any of the three if not used, but the order has to stay
      unchanged (see the sketch after this list).
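
A minimal sketch of the inner-function ordering with a condition (assuming
the `until` helper from theano.scan_module; the computation is only an
example)::

    import theano
    import theano.tensor as T
    from theano.scan_module import until

    def step(x_t, acc):
        new_acc = acc + x_t
        # [outputs], [updates], [condition]: updates are skipped here,
        # but the order of the parts stays the same.
        return new_acc, until(new_acc > 10)

    xs = T.dvector('xs')
    sums, updates = theano.scan(step, sequences=xs,
                                outputs_info=T.constant(0.0))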

Interface bug fix:
 * In some cases, Rop should have returned a list of one Theano variable,
   but returned the variable itself. (Razvan)

New deprecation (will be removed in Theano 0.6, warning generated if you use them):
 * tensor.shared() renamed to tensor._shared(). You probably want to
   call theano.shared() instead! (Olivier D.)


Bug fixes (incorrect results):
 * On CPU, if the convolution had received explicit shape information,
   the shapes were not checked at runtime. This caused wrong results if the
   input shape was not the one expected. (Frederic, reported by Sander
   Dieleman)
 * Theoretical bug: in some cases GPUSum could have returned bad values.
   We were not able to reproduce this problem.
     * patterns affected ({0,1}*nb dim, 0 no reduction on this dim, 1 reduction on this dim):
       01, 011, 0111, 010, 10, 001, 0011, 0101 (Frederic)
 * div by zero in verify_grad. This hid a bug in the grad of Images2Neibs. (James)
 * theano.sandbox.neighbors.Images2Neibs grad was returning a wrong value.
   The grad is now disabled and returns an error. (Frederic)
 * An expression of the form "1 / (exp(x) +- constant)" was systematically matched to "1 / (exp(x) + 1)"
   and turned into a sigmoid regardless of the value of the constant. A warning will be issued if your
   code was affected by this bug. (Olivier, reported by Sander Dieleman)
 * When indexing into a subtensor of negative stride (for instance, x[a:b:-1][c]),
   an optimization replacing it with a direct indexing (x[d]) used an incorrect formula,
   leading to incorrect results. (Pascal, reported by Razvan)
 * The tile() function is now stricter in what it accepts, to allow for better
   error-checking and avoid nonsensical situations. The gradient has been
   disabled for the time being as it only implemented (incorrectly) one special
   case. The `reps` argument must be a constant (not a tensor variable), and
   must have the same length as the number of dimensions in the `x` argument;
   this is now checked. (David)


Scan fixes:
 * computing the grad of a function of the grad of scan (reported by Justin Bayer, fix by Razvan)
   before: it crashed most of the time, but could also return wrong values with a bad number of dimensions (so a visible bug)
   now: does the right thing.
 * gradient with respect to outputs using multiple taps (reported by Timothy, fix by Razvan)
   before: it used to return wrong values
   now: does the right thing.
   Note: The reported case of this bug happened in conjunction with the
         save-memory optimization of scan, which gave runtime errors. So unless
         you manually disabled that memory optimization, you are fine if you
         didn't manually request multiple taps.
 * Rop of the gradient of scan (reported by Timothy and Justin Bayer, fix by Razvan)
   before: compilation error when computing the R-op
   now: does the right thing.
 * save-memory optimization of scan (reported by Timothy and Nicolas BL, fix by Razvan)
   before: certain corner cases used to result in a runtime shape error
   now: does the right thing.
 * Scan grad when the input of scan has sequences of different lengths. (Razvan, reported by Michael Forbes)
 * Scan.infer_shape now works correctly when working with a condition for the number of loops.
   In the past, it returned n_steps as the length, which is not always true. (Razvan)
 * Scan.infer_shape crash fix. (Razvan)

New features:
 * AdvancedIncSubtensor grad defined and tested (Justin Bayer)
 * Adding 1D advanced indexing support to inc_subtensor and set_subtensor (James Bergstra)
 * tensor.{zeros,ones}_like now support the dtype param, as in numpy (Frederic)
 * Added configuration flag "exception_verbosity" to control the verbosity of exceptions (Ian)
 * theano-cache list: list the content of the theano cache (Frederic)
 * theano-cache unlock: remove the Theano cache lock (Olivier)
 * tensor.ceil_int_div to compute ceil(a / float(b)) (Frederic)
 * MaxAndArgMax.grad now works with any axis (the op supports only 1 axis) (Frederic)
     * used by tensor.{max,min,max_and_argmax}
 * tensor.{all,any} (Razvan)
 * tensor.roll as numpy: (Matthew Rocklin, David Warde-Farley)
 * Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
 * IfElse now allows a list/tuple as the result of the if/else branches.
     * They must have the same length and corresponding types (Razvan)
 * Argmax output dtype is now int64 instead of int32. (Olivier)
 * Added the element-wise operation arccos. (Ian)
 * Added sparse dot with dense grad output. (Yann Dauphin)
     * Optimized to Usmm and UsmmCscDense in some case (Yann)
     * Note: theano.dot and theano.sparse.structured_dot() always had a gradient with the same sparsity pattern as the inputs.
       The new theano.sparse.dot() has a dense gradient for all inputs.
 * GpuAdvancedSubtensor1 supports broadcasted dimensions. (Frederic)
 * TensorVariable.zeros_like() and SparseVariable.zeros_like()
 * theano.sandbox.cuda.cuda_ndarray.cuda_ndarray.device_properties() (Frederic)
 * theano.sandbox.cuda.cuda_ndarray.cuda_ndarray.mem_info() return free and total gpu memory (Frederic)
 * Theano flags compiledir_format. Keep the same default as before: compiledir_%(platform)s-%(processor)s-%(python_version)s. (Josh Bleecher Snyder)
     * We also support the "theano_version" substitution.
 * IntDiv C code (faster and allows this elemwise to be fused with other elemwise) (Pascal)
 * Internal filter_variable mechanism in Type. (Pascal, Ian)
    * Ifelse works on sparse.
    * It makes use of gpu shared variable more transparent with theano.function updates and givens parameter.
 * Added a_tensor.transpose(axes); axes is optional (James)
    * theano.tensor.transpose(a_tensor, kwargs): we were ignoring kwargs; now they are used as the axes.
 * a_CudaNdarray_object[*] = int, now works (Frederic)
 * tensor_variable.size (as numpy) computes the product of the shape elements. (Olivier)
 * sparse_variable.size (as scipy) computes the number of stored values. (Olivier)
 * sparse_variable[N, N] now works (Li Yao, Frederic)
 * sparse_variable[M:N, O:P] now works (Li Yao, Frederic, Pascal)
   M, N, O, and P can be Python int or scalar tensor variables, None, or
   omitted (sparse_variable[:, :M] or sparse_variable[:M, N:] work).
 * tensor.tensordot can now be moved to GPU (Sander Dieleman,
   Pascal, based on code from Tijmen Tieleman's gnumpy,
   http://www.cs.toronto.edu/~tijmen/gnumpy.html)
 * Many infer_shape implemented on sparse matrices op. (David W.F.)
 * Added theano.sparse.verify_grad_sparse to easily allow testing grad of
   sparse op. It supports testing the full and structured gradients.
 * The keys in our cache now store the hash of constants and not the constant values
   themselves. This is significantly more efficient for big constant arrays. (Frederic B.)
 * 'theano-cache list' lists key files bigger than 1M (Frederic B.)
 * 'theano-cache list' prints a histogram of the number of keys per compiled module (Frederic B.)
 * 'theano-cache list' prints the number of compiled modules per op class (Frederic B.)
 * The Theano flag "nvcc.fastmath" is now also used for the cuda_ndarray.cu file.
 * Add the header_dirs to the hard part of the compilation key. This is
   currently used only by cuda, but if we use libraries that are only headers,
   this can be useful. (Frederic B.)
 * The Theano flag "nvcc.flags" is now included in the hard part of the key.
   This means that now we recompile all modules for each value of "nvcc.flags".
   A change in "nvcc.flags" used to be ignored for modules that were already
   compiled. (Frederic B.)
 * Alloc and GpuAlloc are no longer always pre-computed (by the constant_folding
   optimization) at compile time, even if all their inputs are constant.
   (Frederic B., Pascal L., reported by Sander Dieleman)
 * New Op tensor.sort(), wrapping numpy.sort (Hani Almousli) (see the sketch after this list)
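
A minimal sketch of a few of the features above (the names follow the
entries; the values are only examples)::

    import theano.tensor as T

    x = T.matrix('x')
    xt = x.transpose(1, 0)    # transpose now accepts explicit axes
    n = x.size                # product of the shape elements, as in numpy
    s = T.sort(x, axis=-1)    # new Op wrapping numpy.sort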


New optimizations:
 * AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker) (Frederic)
 * dot22, dot22scalar work with complex. (Frederic)
 * Generate Gemv/Gemm more often. (James)
 * Remove scan when all computations can be moved outside the loop. (Razvan)
 * scan optimization done earlier. This allows other optimizations to be applied. (Frederic, Guillaume, Razvan)
 * exp(x) * sigmoid(-x) is now correctly optimized to the more stable form sigmoid(x). (Olivier)
 * Added Subtensor(Rebroadcast(x)) => Rebroadcast(Subtensor(x)) optimization. (Guillaume)
 * Made the optimization process faster. (James)
 * Allow fusion of elemwise when the scalar op needs support code. (James)
 * Better opt that lifts transpose around dot. (James)


Crashes fixed:
 * T.mean crash at graph building time. (Ian)
 * "Interactive debugger" crash fix. (Ian, Frederic)
 * Do not call gemm with strides 0, some blas refuse it. (Pascal Lamblin)
 * Optimization crash with gemm and complex. (Frederic)
 * GPU crash with elemwise. (Frederic, some reported by Chris Currivan)
 * Compilation crash with amdlibm and the GPU. (Frederic)
 * IfElse crash. (Frederic)
 * Execution crash fix in AdvancedSubtensor1 on 32 bit computers. (Pascal)
 * GPU compilation crash on MacOS X. (Olivier)
 * Support for OSX Enthought Python Distribution 7.x. (Graham Taylor, Olivier)
 * When the subtensor inputs had 0 dimensions and the outputs 0 dimensions. (Frederic)
 * Crash when the step to subtensor was not 1 in conjunction with some optimization. (Frederic, reported by Olivier Chapelle)
 * Runtime crash related to an optimization with subtensor of alloc (reported by Razvan, fixed by Frederic)
 * Fix dot22scalar cast of integer scalars (Justin Bayer, Frédéric, Olivier)
 * Fix runtime crash in gemm, dot22. (Frederic B.)
 * Fix on 32 bit computer: make sure all shapes are int64. (Olivier)
 * Fix to deque on python 2.4 (Olivier)
 * Fix crash when not using C code (or using DebugMode) (not used by
   default) with numpy 1.6*. Numpy has a bug in the reduction code that
   made it crash. (Pascal)
 * Crashes of blas functions (Gemv on CPU; Ger, Gemv and Gemm on GPU)
   when matrices had non-unit stride in both dimensions (CPU and GPU),
   or when matrices had negative strides (GPU only). In those cases,
   we are now making copies. (Pascal)
 * More cases supported in AdvancedIncSubtensor1. (Olivier D.)
 * Fix crash when a broadcasted constant was used as input of an
   elemwise Op and needed to be upcasted to match the op's output.
   (Reported by John Salvatier, fixed by Pascal L.)
 * Fixed a memory leak with shared variable (we kept a pointer to the original value) (Ian G.)


Known bugs:
 * CAReduce with nan in inputs doesn't return the correct output (`Ticket <https://www.assembla.com/spaces/theano/tickets/763>`_).
     * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.


Sandbox:
 * cvm interface more consistent with current linker. (James)
   * Now all tests pass with the linker=cvm flag.
 * vm linker has a callback parameter. (James)
 * review/finish/doc: diag/extract_diag. (Arnaud Bergeron, Frederic, Olivier)
 * review/finish/doc: AllocDiag/diag. (Arnaud, Frederic, Guillaume)
 * review/finish/doc: MatrixInverse, matrix_inverse. (Razvan)
 * review/finish/doc: matrix_dot. (Razvan)
 * review/finish/doc: det (determinant) op. (Philippe Hamel)
 * review/finish/doc: Cholesky determinant op. (David)
 * review/finish/doc: ensure_sorted_indices. (Li Yao)
 * review/finish/doc: spectral_radius_bound. (Xavier Glorot)
 * review/finish/doc: sparse sum. (Valentin Bisson)
 * review/finish/doc: Remove0 (Valentin)
 * review/finish/doc: SquareDiagonal (Eric)


Sandbox New features (not enabled by default):
 * CURAND_RandomStreams for uniform and normal (not picklable, GPU only) (James)
 * New sandbox.linalg.ops.pinv(pseudo-inverse) op (Razvan)


Documentation:
 * Many updates. (Many people)
 * Updates to install doc on MacOS. (Olivier)
 * Updates to install doc on Windows. (David, Olivier)
 * Doc on the Rop function (Ian)
 * Added how to use scan to loop with a condition as the number of iterations. (Razvan)
 * Added how to wrap in Theano an existing python function (in numpy, scipy, ...). (Frederic)
 * Refactored GPU installation of Theano. (Olivier)


Others:
 * Better error messages in many places. (Many people)
 * PEP8 fixes. (Many people)
 * Add a warning about numpy bug when using advanced indexing on a
   tensor with more than 2**32 elements (the resulting array is not
   correctly filled and ends with zeros). (Pascal, reported by David WF)
 * Added Scalar.ndim=0 and ScalarSharedVariable.ndim=0 (simplify code) (Razvan)
 * New min_informative_str() function to print graph. (Ian)
 * Fix catching of exception. (Sometimes we used to catch interrupts) (Frederic, David, Ian, Olivier)
 * Better support for utf string. (David)
 * Fix pydotprint with a function compiled with a ProfileMode (Frederic)
     * Was broken with change to the profiler.
 * Warning when people have old cache entries. (Olivier)
 * More tests for join on the GPU and CPU. (Frederic)
 * Do not request to load the GPU module by default in scan module. (Razvan)
 * Fixed some import problems. (Frederic and others)
 * Filtering update. (James)
 * On Windows, the default compiledir changed to be local to the
   computer/user and not transferred with roaming profile. (Sebastian
   Urban)
 * New theano flag "on_shape_error". Defaults to "warn" (same as previous behavior):
   it prints a warning when an error occurs when inferring the shape of some apply node.
   The other accepted value is "raise" to raise an error when this happens. (Frederic)
 * The buildbot now raises optimization/shape errors instead of just printing a warning. (Frederic)
 * better pycuda tests (Frederic)
 * check_blas.py now accepts the shape and the number of iterations as parameters (Frederic)
 * Fix opt warning when the opt ShapeOpt is disabled (enabled by default) (Frederic)
 * More internal verification of what each op.infer_shape returns. (Frederic, James)
 * Improved docstring and basic tests for the Tile Op (David).

Reviewers (alphabetical order):
 * David, Frederic, Ian, James, Olivier, Razvan


Theano 0.4.1 (12 August 2011)
=============================

New features:

 * `R_op <http://deeplearning.net/software/theano/tutorial/gradients.html>`_ macro like theano.tensor.grad

   * Not all tests are done yet (TODO)
 * Added alias theano.tensor.bitwise_{and,or,xor,not}. They are the numpy names.
 * Updates returned by Scan (you need to pass them to theano.function) are now a new Updates class.
   That allows more checks and makes them easier to work with. The Updates class is a subclass of dict.
 * Scan can now work in a "do while" loop style.

   * We scan until a condition is met.
   * There is a minimum of 1 iteration (can't do "while do" style loops)
 * The "Interactive Debugger" (compute_test_value theano flags)

   * Now should work with all ops (even the one with only C code)
   * In the past some errors were caught and re-raised as unrelated errors (ShapeMismatch replaced with NotImplemented). We don't do that anymore.
 * The new Op.make_thunk function (introduced in 0.4.0) is now used by constant_folding and DebugMode
 * Added A_TENSOR_VARIABLE.astype() as a way to cast. NumPy allows this syntax.
 * New BLAS GER implementation.
 * Insert GEMV more frequently.
 * Added new ifelse(scalar condition, rval_if_true, rval_if_false) Op (see the sketch after this list).

   * This is a subset of the elemwise switch (tensor condition, rval_if_true, rval_if_false).
   * With the new feature in the sandbox, only one of rval_if_true or rval_if_false will be evaluated.
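
A minimal sketch of the new ifelse Op (the import location follows later
Theano versions, theano.ifelse; with the lazy vm/cvm linkers only the
selected branch is evaluated, unlike the elemwise switch)::

    import theano
    import theano.tensor as T
    from theano.ifelse import ifelse

    a, b = T.scalars('a', 'b')
    x, y = T.matrices('x', 'y')
    z = ifelse(T.lt(a, b), T.mean(x), T.mean(y))
    f = theano.function([a, b, x, y], z)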

Optimizations:

 * Subtensor has C code
 * {Inc,Set}Subtensor has C code
 * ScalarFromTensor has C code
 * dot(zeros,x) and dot(x,zeros)
 * IncSubtensor(x, zeros, idx) -> x
 * SetSubtensor(x, x[idx], idx) -> x (when x is a constant)
 * subtensor(alloc,...) -> alloc
 * Many new scan optimizations

   * Lower scan execution overhead with a Cython implementation
   * Removed scan double compilation (by using the new Op.make_thunk mechanism)
   * Certain computations from the inner graph are now pushed out into the outer
     graph. This means they are not re-computed at every step of scan.
   * Different scan ops now get merged into a single op (if possible), reducing
     the overhead and sharing computations between the two instances

GPU:

 * PyCUDA/CUDAMat/Gnumpy/Theano bridge and `documentation <http://deeplearning.net/software/theano/tutorial/gpu_data_convert.html>`_.

   * New function to easily convert pycuda GPUArray object to and from CudaNdarray object
   * Fixed a bug if you created a view of a manually created CudaNdarray that is a view of a GPUArray.
 * Removed a warning when nvcc is not available and the user did not request it.
 * renamed config option cuda.nvccflags -> nvcc.flags
 * Allow GpuSoftmax and GpuSoftmaxWithBias to work with bigger input.

Bugs fixed:

 * In one case an AdvancedSubtensor1 could be converted to a GpuAdvancedIncSubtensor1 instead of GpuAdvancedSubtensor1.
   It probably didn't happen due to the order of optimizations, but that order is not guaranteed to be the same on all computers.
 * Derivative of set_subtensor was wrong.
 * Derivative of Alloc was wrong.

Crash fixed:

 * On an unusual Python 2.4.4 on Windows
 * When using a C cache copied from another location
 * On Windows 32 bits when setting a complex64 to 0.
 * Compilation crash with CUDA 4
 * When wanting to copy the compilation cache from a computer to another

   * This can be useful for using Theano on a computer without a compiler.
 * GPU:

   * Compilation crash fixed under Ubuntu 11.04
   * Compilation crash fixed with CUDA 4.0

Known bug:

 * CAReduce with nan in inputs don't return the good output (`Ticket <http://trac-hg.assembla.com/theano/ticket/763>`_).

   * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
   * This is not a new bug, just a bug discovered since the last release that we didn't have time to fix.

Deprecation (will be removed in Theano 0.5, warning generated if you use them):

 * The string mode (accepted only by theano.function()) FAST_RUN_NOGC. Use Mode(linker='c|py_nogc') instead.
 * The string mode (accepted only by theano.function()) STABILIZE. Use Mode(optimizer='stabilize') instead.
 * scan interface change:

   * The use of `return_steps` for specifying how many entries of the output
     to return has been deprecated

     * The same thing can be done by applying a subtensor on the output
       returned by scan to select a certain slice
   * The inner function (that scan receives) should return its outputs and
     updates following this order:

        [outputs], [updates], [condition]. One can skip any of the three if not
        used, but the order has to stay unchanged.
 * tensor.grad(cost, wrt) will return an object of the "same type" as wrt
   (list/tuple/TensorVariable).

   * Currently tensor.grad returns a list type when the wrt is a list/tuple of
     more than 1 element.

Deprecated in 0.4.0 (reminder; warning generated if you use them):

 * Dividing integers with / is deprecated: use // for integer division, or
   cast one of the integers to a float type if you want a float result (you may
   also change this behavior with config.int_division).
 * tag.shape attribute deprecated (#633)
 * CudaNdarray_new_null is deprecated in favour of CudaNdarray_New

Sandbox:

 * MRG random generator now implements the same casting behavior as the regular random generator.

Sandbox New features(not enabled by default):

 * New Linkers (theano flags linker={vm,cvm})

   * The new linker allows lazy evaluation of the new ifelse op, meaning we compute only the true or false branch depending of the condition. This can speed up some types of computation.
   * Uses a new profiling system (that currently tracks less stuff)
   * The cvm is implemented in C, so it lowers Theano's overhead.
   * The vm is implemented in python. So it can help debugging in some cases.
   * In the future, the default will be the cvm.
 * Some new not yet well tested sparse ops: theano.sparse.sandbox.{SpSum, Diag, SquareDiagonal, ColScaleCSC, RowScaleCSC, Remove0, EnsureSortedIndices, ConvolutionIndices}

Documentation:

 * How to compute the `Jacobian, Hessian, Jacobian times a vector, Hessian times a vector <http://deeplearning.net/software/theano/tutorial/gradients.html>`_.
 * Slide for a 3 hours class with exercises that was done at the HPCS2011 Conference in Montreal.

Others:

 * Logger name renamed to be consistent.
 * Logger function simplified and made more consistent.
 * Fixed errors being transformed into other, unrelated errors when using the compute_test_value Theano flag.
 * Compilation cache enhancements.
 * Made compatible with NumPy 1.6 and SciPy 0.9
 * Fix tests when there was new dtype in NumPy that is not supported by Theano.
 * Fixed some tests when SciPy is not available.
 * Don't compile anything when Theano is imported. Compile support code when we compile the first C code.
 * Python 2.4 fix:

   * Fix the file theano/misc/check_blas.py
   * For python 2.4.4 on Windows, replaced float("inf") with numpy.inf.
 * Removes useless inputs to a scan node

   * Beautification mostly, making the graph more visible. Such inputs would appear as a consequence of other optimizations

Core:

 * there is a new mechanism that lets an Op permit one of its
   inputs to be aliased to another destroyed input.  This will generally
   result in incorrect calculations, so it should be used with care!  The
   right way to use it is when the caller can guarantee that even if
   these two inputs look aliased, they actually will never overlap. This
   mechanism can be used, for example, by a new alternative approach to
   implementing Scan.  If an op has an attribute called
   "destroyhandler_tolerate_aliased" then this is what's going on.
   IncSubtensor is thus far the only Op to use this mechanism.

Theano 0.4.0 (2011-06-13)
=========================

Change in output memory storage for Ops:
 If you implemented custom Ops, with either C or Python implementation,
 this will concern you.

 The contract for memory storage of Ops has been changed. In particular,
 it is no longer guaranteed that output memory buffers are either empty,
 or allocated by a previous execution of the same Op.

 Right now, here is the situation:
  * For Python implementation (perform), what is inside output_storage
    may have been allocated from outside the perform() function, for
    instance by another node (e.g., Scan) or the Mode. If that was the
    case, the memory can be assumed to be C-contiguous (for the moment).
  * For C implementations (c_code), nothing has changed yet.

 In a future version, the content of the output storage, both for Python and C
 versions, will either be NULL, or have the following guarantees:
  * It will be a Python object of the appropriate Type (for a Tensor variable,
    a numpy.ndarray, for a GPU variable, a CudaNdarray, for instance)
  * It will have the correct number of dimensions, and correct dtype
 However, its shape and memory layout (strides) will not be guaranteed.

 When that change is made, the config flag DebugMode.check_preallocated_output
 will help you find implementations that are not up-to-date.
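
 A hedged sketch of a perform() implementation that follows the new
 contract (the Op and all names are illustrative, not from Theano itself)::

     import numpy as np
     from theano.gof import Apply, Op

     class DoubleOp(Op):
         def make_node(self, x):
             return Apply(self, [x], [x.type()])

         def perform(self, node, inputs, output_storage):
             x, = inputs
             out = output_storage[0]
             # out[0] may hold memory allocated outside perform(), e.g. by
             # Scan or the Mode; reuse it only when the shape matches.
             if out[0] is not None and out[0].shape == x.shape:
                 np.multiply(x, 2, out=out[0])
             else:
                 out[0] = 2 * np.asarray(x)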

Deprecation:
 * tag.shape attribute deprecated (#633)
 * CudaNdarray_new_null is deprecated in favour of CudaNdarray_New
 * Dividing integers with / is deprecated: use // for integer division, or
   cast one of the integers to a float type if you want a float result (you may
   also change this behavior with config.int_division).
 * Removed (already deprecated) sandbox/compile module
 * Removed (already deprecated) incsubtensor and setsubtensor functions,
   inc_subtensor and set_subtensor are to be used instead.

Bugs fixed:
 * In CudaNdarray.__{iadd,idiv}__, when it is not implemented, return the error.
 * THEANO_FLAGS='optimizer=None' now works as expected
 * Fixed memory leak in error handling on GPU-to-host copy
 * Fix relating specifically to Python 2.7 on Mac OS X
 * infer_shape can now handle Python longs
 * Trying to compute x % y with one or more arguments being complex now
   raises an error.
 * The output of random samples computed with uniform(..., dtype=...) is
   guaranteed to be of the specified dtype instead of potentially being of a
   higher-precision dtype.
 * The perform() method of DownsampleFactorMax did not give the right result
   when reusing output storage. This happens only if you use the Theano flag
   'linker=c|py_nogc' or manually specify the mode to be 'c|py_nogc'.

Crash fixed:
 * Work around a bug in gcc 4.3.0 that made the compilation of 2d convolution
   crash.
 * Some optimizations crashed when the "ShapeOpt" optimization was disabled.

Optimization:
 * Optimize all subtensor followed by subtensor.

GPU:
 * Move to the GPU fused elemwise ops that contain dtypes other than float32
   (except float64), if the inputs and outputs are float32.
   * This allows moving elemwise comparisons to the GPU if we cast the
     result to float32 afterwards.
 * Implemented CudaNdarray.ndim to have the same interface as ndarray.
 * Fixed slowdown caused by multiple chained views on CudaNdarray objects
 * CudaNdarray_alloc_contiguous changed so as to never try to free
   memory on a view: new "base" property
 * Safer decref behaviour in CudaNdarray in case of failed allocations
 * New GPU implementation of tensor.basic.outer
 * Multinomial random variates now available on GPU

New features:
 * ProfileMode
    * profile the scan overhead
    * simple hook system to add profiler
    * reordered the output to be in the order of more general to more specific
 * DebugMode now checks Ops with different patterns of preallocated memory,
   configured by config.DebugMode.check_preallocated_output.
 * var[vector of indices] now works (the grad works recursively, the direct grad
   works inplace, and it works on the GPU)
    * limitation: works only on the outermost dimension.
 * New way to test the graph as we build it. Allows easily finding the source
   of shape mismatch errors:
   `http://deeplearning.net/software/theano/tutorial/debug_faq.html#interactive-debugger`__
 * cuda.root inferred if nvcc is on the path, otherwise defaults to
   /usr/local/cuda
 * Better graph printing for graphs involving a scan subgraph
 * Casting behavior can be controlled through config.cast_policy,
   new (experimental) mode.
 * Smarter C module cache, avoiding erroneous usage of the wrong C
   implementation when some options change, and avoiding recompiling the
   same module multiple times in some situations.
 * The "theano-cache clear" command now clears the cache more thoroughly.
 * More extensive linear algebra ops (CPU only) that wrap scipy.linalg
   now available in the sandbox.
 * CUDA devices 4 - 16 should now be available if present.
 * infer_shape support for the View op, better infer_shape support in Scan
 * infer_shape supported in all cases of subtensor
 * tensor.grad now gives an error by default when computing the gradient
   wrt a node that is disconnected from the cost (not in the graph, or
   no continuous path from that op to the cost).
 * New tensor.isnan and isinf functions.

Documentation:
 * Better commenting of cuda_ndarray.cu
 * Fixes in the scan documentation: add missing declarations/print statements
 * Better error message on failed __getitem__
 * Updated documentation on profile mode
 * Better documentation of testing on Windows
 * Better documentation of the 'run_individual_tests' script

Unit tests:
 * More strict float comparison by default
 * Reuse tests for subtensor of tensor for GPU tensors (more GPU tests)
 * Tests that check for aliased function inputs and assure appropriate copying
   (#374)
 * Better test of copies in CudaNdarray
 * New tests relating to the new base pointer requirements
 * Better scripts to run tests individually or in batches
 * Some tests are now run whenever cuda is available and not just when it has
   been enabled before
 * Tests display less pointless warnings.

Other:
 * Correctly put the broadcast flag to True in the output var of
   a Reshape op when we receive an int 1 in the new shape.
 * pydotprint: high contrast mode is now the default, option to print
   more compact node names.
 * pydotprint: Now truncates labels that are too long.
 * More compact printing (ignore leading "Composite" in op names)


Theano 0.3.1 (2011-02-21)
=========================

Deprecation:
 * The theano shared variable attribute `value` is deprecated, use `get_value()` or `set_value()`!
    See http://deeplearning.net/software/theano/tutorial/aliasing.html

Bugs fixed:
 * The random number generator in theano/sandbox/rng_mrg.py did not always return the same sequence of numbers on the CPU and GPU.
    * In some cases, there was a (possibly large) fraction of non-random garbage in the returned sequence.

 * In python mode (not the default mode), when the input of an elemwise operation was an empty ndarray, we were not returning an empty ndarray.
 * Scan cached the number of steps. This caused no problem because each time you called scan the number of steps would get refreshed.
   The problem was when you called ScanGrad, which would use the cached number of steps without refreshing it.
   To be affected by this bug, one would have to compile two graphs, one containing a Scan and the other the corresponding GradScan, then
   call the first function to cache the number of steps, and then call the second function with a different number of steps.
 * In GpuConv, errors in conv_patch_stack_reduce when the entire kernel doesn't fit into shared memory.
   The error was not found before as the impact was less than the relative tolerance of 1e-3. Now the relative tolerance is 1e-5.

Crash fixed:
 * Avoid an exception that made Theano crash when taking the gradient on DimShuffle in some particular cases.
 * Compilation crash for GpuElemwise with tensor with high number of dimensions (~6 or more).
 * Disabled a C code generator that made gcc crash on complex types.
 * Crash in optimization when an Op has no input.
 * Output shape is now computed correctly for matrix-vector multiplication on GPU.
 * In Scan, when using numbers as inputs, not symbolic variables.
 * In GradScan, when there is only 1 input in the Scan.
 * In GpuSum, bug in calculation of n_blocks for the 10 pattern. (Sum on the row of a matrix)
 * Some segfault at exit with GPU code.

Optimization:
 * New SpecifyShape op that allows passing more shape info in the graph.
 * Speed up gemv by working around scipy's gemv slowness when the matrix is in C order (the default).
 * Remove join of only 1 element.
 * During optimization, consider one more case in get_constant_value.

GPU:
 * cuda_shared.value = X now works inplace!
     * cuda_shared_var.set_value(new_ndarray) will overwrite the old value inplace in the most common case.
 * Allow to create a CudaNdarraySharedVariable from a CudaNdarray.
 * New init_gpu_device theano flags.
 * Fuse GpuElemwise more often (in the case where there are so many inputs that fusing them all would exceed the 256-byte limit on parameters to a GPU function).
 * Fixed the case of a CPU join of only 1 element that was not moved to the GPU.

New features:
 * tensor.reshape now makes dimensions of length 1 broadcastable.
 * tensor.prod now implements the gradient.
 * DebugMode now warns if an Op declared itself as returning a view of the input but did not do so.
    * This behaviour is a problem, because it can block other Ops from being inplace on the same inputs. This could lower the reuse of memory.
 * Sparse.structured_dot now works when both matrices are sparse
 * Sparse type is now supported by the shape op, and the ShapeFeature optimizer works correctly with them.
 * New 3D convolution ops, with CPU and GPU implementations.
 * New colors in pydotprint.

Documentation:
 * Documented lib.amdlibm and (new) init_gpu_device config variables.
 * A new page (was done for 0.3 but an error was hiding it on the web page) on the memory aliasing contract of Theano.
 * Revision to the Windows installation instructions.
 * The cuda documentation is now generated on the web server.
 * Better documentation of .theanorc and its sections.

Unit tests:
 * Stop usage of deprecated functions or syntax in the unit tests.
 * Better testing of GPU convolution nets.
 * Make more tests able to use different random seeds.
 * Tests of sparse now use default mode, not a hard-coded one.
 * Remove some tests of unimplemented features.

Other:
 * The name of compiledir now includes the Python version to make it easier for people with many Python versions
 * Added theano.tensor.std as a shortcut to sqrt(var(input=input, axis=axis)).
 * Whitespace, tabulation and indentation clean-up in the code.
 * Better detection of memory sharing between variables.


Theano 0.3 (2010-11-23)
=======================

This is the first major release of Theano since 0.1. Version 0.2 development started internally but it was never advertised as a release.

There have been so many changes since 0.1 that we have lost track of many of them. Below is a *partial* list of changes since 0.1.

 * GPU code using NVIDIA's CUDA framework is now generated for many Ops.
 * Some interface changes since 0.1:
     * A new "shared variable" system to allow reusing memory space between Theano functions.
         * A new memory contract has been formally written for Theano, for people who want to minimize memory copies.
     * The old module system has been deprecated.
     * By default, inputs to a Theano function will not be silently downcasted (e.g. from float64 to float32).
     * An error is now raised when using the result of a logical operation on a Theano variable in an 'if' (i.e. an implicit call to __nonzero__).
     * An error is now raised when we receive a non-aligned ndarray as input to a function (this is not supported).
     * An error is raised when the list of dimensions passed to dimshuffle() contains duplicates or is otherwise not sensible.
     * Call NumPy BLAS bindings for gemv operations in addition to the already supported gemm.
     * If gcc is unavailable at import time, Theano now falls back to a Python-based emulation mode after raising a warning.
     * An error is now raised when tensor.grad is called on a non-scalar Theano variable (in the past we would implicitly do a sum on the tensor to make it a scalar).
     * Added support for "erf" and "erfc" functions.
 * The current default value of the parameter axis of theano.{max,min,argmax,argmin,max_and_argmax} is deprecated. We now use the default NumPy behavior of operating on the entire tensor.
 * Theano is now available from PyPI and installable through "easy_install" or "pip".


Theano 0.1
==========

*Release date: 2009-04-02*

What works
----------

- building symbolic expression.
- arranging symbolic expressions into Modules so that multiple functions
  can work on the same data.
- symbolic gradient descent.
- graph optimization.
- compilation to C for many kinds of expression.
- a debugging mode that checks that your expression results are correct,
  using a variety of sanity checks.

What's missing?
---------------

- An algorithm library. We're missing a library of examples and standard
  component implementations.  Some examples will find their way into
  the Theano repo, but standard algorithms will go into the 'pylearn'
  project (toolbox style). Now that we have a stable foundation, we
  can reach a consensus on style for algorithms.