File: mmnotes.txt

package info
metamath-databases 0.0.0~20210101.git55fe226-2
  • links: PTS, VCS
  • area: main
  • in suites: bookworm, bullseye, sid
  • size: 48,584 kB
  • sloc: makefile: 7; sh: 6
file content (9166 lines) | stat: -rw-r--r-- 427,114 bytes
6962
6963
6964
6965
6966
6967
6968
6969
6970
6971
6972
6973
6974
6975
6976
6977
6978
6979
6980
6981
6982
6983
6984
6985
6986
6987
6988
6989
6990
6991
6992
6993
6994
6995
6996
6997
6998
6999
7000
7001
7002
7003
7004
7005
7006
7007
7008
7009
7010
7011
7012
7013
7014
7015
7016
7017
7018
7019
7020
7021
7022
7023
7024
7025
7026
7027
7028
7029
7030
7031
7032
7033
7034
7035
7036
7037
7038
7039
7040
7041
7042
7043
7044
7045
7046
7047
7048
7049
7050
7051
7052
7053
7054
7055
7056
7057
7058
7059
7060
7061
7062
7063
7064
7065
7066
7067
7068
7069
7070
7071
7072
7073
7074
7075
7076
7077
7078
7079
7080
7081
7082
7083
7084
7085
7086
7087
7088
7089
7090
7091
7092
7093
7094
7095
7096
7097
7098
7099
7100
7101
7102
7103
7104
7105
7106
7107
7108
7109
7110
7111
7112
7113
7114
7115
7116
7117
7118
7119
7120
7121
7122
7123
7124
7125
7126
7127
7128
7129
7130
7131
7132
7133
7134
7135
7136
7137
7138
7139
7140
7141
7142
7143
7144
7145
7146
7147
7148
7149
7150
7151
7152
7153
7154
7155
7156
7157
7158
7159
7160
7161
7162
7163
7164
7165
7166
7167
7168
7169
7170
7171
7172
7173
7174
7175
7176
7177
7178
7179
7180
7181
7182
7183
7184
7185
7186
7187
7188
7189
7190
7191
7192
7193
7194
7195
7196
7197
7198
7199
7200
7201
7202
7203
7204
7205
7206
7207
7208
7209
7210
7211
7212
7213
7214
7215
7216
7217
7218
7219
7220
7221
7222
7223
7224
7225
7226
7227
7228
7229
7230
7231
7232
7233
7234
7235
7236
7237
7238
7239
7240
7241
7242
7243
7244
7245
7246
7247
7248
7249
7250
7251
7252
7253
7254
7255
7256
7257
7258
7259
7260
7261
7262
7263
7264
7265
7266
7267
7268
7269
7270
7271
7272
7273
7274
7275
7276
7277
7278
7279
7280
7281
7282
7283
7284
7285
7286
7287
7288
7289
7290
7291
7292
7293
7294
7295
7296
7297
7298
7299
7300
7301
7302
7303
7304
7305
7306
7307
7308
7309
7310
7311
7312
7313
7314
7315
7316
7317
7318
7319
7320
7321
7322
7323
7324
7325
7326
7327
7328
7329
7330
7331
7332
7333
7334
7335
7336
7337
7338
7339
7340
7341
7342
7343
7344
7345
7346
7347
7348
7349
7350
7351
7352
7353
7354
7355
7356
7357
7358
7359
7360
7361
7362
7363
7364
7365
7366
7367
7368
7369
7370
7371
7372
7373
7374
7375
7376
7377
7378
7379
7380
7381
7382
7383
7384
7385
7386
7387
7388
7389
7390
7391
7392
7393
7394
7395
7396
7397
7398
7399
7400
7401
7402
7403
7404
7405
7406
7407
7408
7409
7410
7411
7412
7413
7414
7415
7416
7417
7418
7419
7420
7421
7422
7423
7424
7425
7426
7427
7428
7429
7430
7431
7432
7433
7434
7435
7436
7437
7438
7439
7440
7441
7442
7443
7444
7445
7446
7447
7448
7449
7450
7451
7452
7453
7454
7455
7456
7457
7458
7459
7460
7461
7462
7463
7464
7465
7466
7467
7468
7469
7470
7471
7472
7473
7474
7475
7476
7477
7478
7479
7480
7481
7482
7483
7484
7485
7486
7487
7488
7489
7490
7491
7492
7493
7494
7495
7496
7497
7498
7499
7500
7501
7502
7503
7504
7505
7506
7507
7508
7509
7510
7511
7512
7513
7514
7515
7516
7517
7518
7519
7520
7521
7522
7523
7524
7525
7526
7527
7528
7529
7530
7531
7532
7533
7534
7535
7536
7537
7538
7539
7540
7541
7542
7543
7544
7545
7546
7547
7548
7549
7550
7551
7552
7553
7554
7555
7556
7557
7558
7559
7560
7561
7562
7563
7564
7565
7566
7567
7568
7569
7570
7571
7572
7573
7574
7575
7576
7577
7578
7579
7580
7581
7582
7583
7584
7585
7586
7587
7588
7589
7590
7591
7592
7593
7594
7595
7596
7597
7598
7599
7600
7601
7602
7603
7604
7605
7606
7607
7608
7609
7610
7611
7612
7613
7614
7615
7616
7617
7618
7619
7620
7621
7622
7623
7624
7625
7626
7627
7628
7629
7630
7631
7632
7633
7634
7635
7636
7637
7638
7639
7640
7641
7642
7643
7644
7645
7646
7647
7648
7649
7650
7651
7652
7653
7654
7655
7656
7657
7658
7659
7660
7661
7662
7663
7664
7665
7666
7667
7668
7669
7670
7671
7672
7673
7674
7675
7676
7677
7678
7679
7680
7681
7682
7683
7684
7685
7686
7687
7688
7689
7690
7691
7692
7693
7694
7695
7696
7697
7698
7699
7700
7701
7702
7703
7704
7705
7706
7707
7708
7709
7710
7711
7712
7713
7714
7715
7716
7717
7718
7719
7720
7721
7722
7723
7724
7725
7726
7727
7728
7729
7730
7731
7732
7733
7734
7735
7736
7737
7738
7739
7740
7741
7742
7743
7744
7745
7746
7747
7748
7749
7750
7751
7752
7753
7754
7755
7756
7757
7758
7759
7760
7761
7762
7763
7764
7765
7766
7767
7768
7769
7770
7771
7772
7773
7774
7775
7776
7777
7778
7779
7780
7781
7782
7783
7784
7785
7786
7787
7788
7789
7790
7791
7792
7793
7794
7795
7796
7797
7798
7799
7800
7801
7802
7803
7804
7805
7806
7807
7808
7809
7810
7811
7812
7813
7814
7815
7816
7817
7818
7819
7820
7821
7822
7823
7824
7825
7826
7827
7828
7829
7830
7831
7832
7833
7834
7835
7836
7837
7838
7839
7840
7841
7842
7843
7844
7845
7846
7847
7848
7849
7850
7851
7852
7853
7854
7855
7856
7857
7858
7859
7860
7861
7862
7863
7864
7865
7866
7867
7868
7869
7870
7871
7872
7873
7874
7875
7876
7877
7878
7879
7880
7881
7882
7883
7884
7885
7886
7887
7888
7889
7890
7891
7892
7893
7894
7895
7896
7897
7898
7899
7900
7901
7902
7903
7904
7905
7906
7907
7908
7909
7910
7911
7912
7913
7914
7915
7916
7917
7918
7919
7920
7921
7922
7923
7924
7925
7926
7927
7928
7929
7930
7931
7932
7933
7934
7935
7936
7937
7938
7939
7940
7941
7942
7943
7944
7945
7946
7947
7948
7949
7950
7951
7952
7953
7954
7955
7956
7957
7958
7959
7960
7961
7962
7963
7964
7965
7966
7967
7968
7969
7970
7971
7972
7973
7974
7975
7976
7977
7978
7979
7980
7981
7982
7983
7984
7985
7986
7987
7988
7989
7990
7991
7992
7993
7994
7995
7996
7997
7998
7999
8000
8001
8002
8003
8004
8005
8006
8007
8008
8009
8010
8011
8012
8013
8014
8015
8016
8017
8018
8019
8020
8021
8022
8023
8024
8025
8026
8027
8028
8029
8030
8031
8032
8033
8034
8035
8036
8037
8038
8039
8040
8041
8042
8043
8044
8045
8046
8047
8048
8049
8050
8051
8052
8053
8054
8055
8056
8057
8058
8059
8060
8061
8062
8063
8064
8065
8066
8067
8068
8069
8070
8071
8072
8073
8074
8075
8076
8077
8078
8079
8080
8081
8082
8083
8084
8085
8086
8087
8088
8089
8090
8091
8092
8093
8094
8095
8096
8097
8098
8099
8100
8101
8102
8103
8104
8105
8106
8107
8108
8109
8110
8111
8112
8113
8114
8115
8116
8117
8118
8119
8120
8121
8122
8123
8124
8125
8126
8127
8128
8129
8130
8131
8132
8133
8134
8135
8136
8137
8138
8139
8140
8141
8142
8143
8144
8145
8146
8147
8148
8149
8150
8151
8152
8153
8154
8155
8156
8157
8158
8159
8160
8161
8162
8163
8164
8165
8166
8167
8168
8169
8170
8171
8172
8173
8174
8175
8176
8177
8178
8179
8180
8181
8182
8183
8184
8185
8186
8187
8188
8189
8190
8191
8192
8193
8194
8195
8196
8197
8198
8199
8200
8201
8202
8203
8204
8205
8206
8207
8208
8209
8210
8211
8212
8213
8214
8215
8216
8217
8218
8219
8220
8221
8222
8223
8224
8225
8226
8227
8228
8229
8230
8231
8232
8233
8234
8235
8236
8237
8238
8239
8240
8241
8242
8243
8244
8245
8246
8247
8248
8249
8250
8251
8252
8253
8254
8255
8256
8257
8258
8259
8260
8261
8262
8263
8264
8265
8266
8267
8268
8269
8270
8271
8272
8273
8274
8275
8276
8277
8278
8279
8280
8281
8282
8283
8284
8285
8286
8287
8288
8289
8290
8291
8292
8293
8294
8295
8296
8297
8298
8299
8300
8301
8302
8303
8304
8305
8306
8307
8308
8309
8310
8311
8312
8313
8314
8315
8316
8317
8318
8319
8320
8321
8322
8323
8324
8325
8326
8327
8328
8329
8330
8331
8332
8333
8334
8335
8336
8337
8338
8339
8340
8341
8342
8343
8344
8345
8346
8347
8348
8349
8350
8351
8352
8353
8354
8355
8356
8357
8358
8359
8360
8361
8362
8363
8364
8365
8366
8367
8368
8369
8370
8371
8372
8373
8374
8375
8376
8377
8378
8379
8380
8381
8382
8383
8384
8385
8386
8387
8388
8389
8390
8391
8392
8393
8394
8395
8396
8397
8398
8399
8400
8401
8402
8403
8404
8405
8406
8407
8408
8409
8410
8411
8412
8413
8414
8415
8416
8417
8418
8419
8420
8421
8422
8423
8424
8425
8426
8427
8428
8429
8430
8431
8432
8433
8434
8435
8436
8437
8438
8439
8440
8441
8442
8443
8444
8445
8446
8447
8448
8449
8450
8451
8452
8453
8454
8455
8456
8457
8458
8459
8460
8461
8462
8463
8464
8465
8466
8467
8468
8469
8470
8471
8472
8473
8474
8475
8476
8477
8478
8479
8480
8481
8482
8483
8484
8485
8486
8487
8488
8489
8490
8491
8492
8493
8494
8495
8496
8497
8498
8499
8500
8501
8502
8503
8504
8505
8506
8507
8508
8509
8510
8511
8512
8513
8514
8515
8516
8517
8518
8519
8520
8521
8522
8523
8524
8525
8526
8527
8528
8529
8530
8531
8532
8533
8534
8535
8536
8537
8538
8539
8540
8541
8542
8543
8544
8545
8546
8547
8548
8549
8550
8551
8552
8553
8554
8555
8556
8557
8558
8559
8560
8561
8562
8563
8564
8565
8566
8567
8568
8569
8570
8571
8572
8573
8574
8575
8576
8577
8578
8579
8580
8581
8582
8583
8584
8585
8586
8587
8588
8589
8590
8591
8592
8593
8594
8595
8596
8597
8598
8599
8600
8601
8602
8603
8604
8605
8606
8607
8608
8609
8610
8611
8612
8613
8614
8615
8616
8617
8618
8619
8620
8621
8622
8623
8624
8625
8626
8627
8628
8629
8630
8631
8632
8633
8634
8635
8636
8637
8638
8639
8640
8641
8642
8643
8644
8645
8646
8647
8648
8649
8650
8651
8652
8653
8654
8655
8656
8657
8658
8659
8660
8661
8662
8663
8664
8665
8666
8667
8668
8669
8670
8671
8672
8673
8674
8675
8676
8677
8678
8679
8680
8681
8682
8683
8684
8685
8686
8687
8688
8689
8690
8691
8692
8693
8694
8695
8696
8697
8698
8699
8700
8701
8702
8703
8704
8705
8706
8707
8708
8709
8710
8711
8712
8713
8714
8715
8716
8717
8718
8719
8720
8721
8722
8723
8724
8725
8726
8727
8728
8729
8730
8731
8732
8733
8734
8735
8736
8737
8738
8739
8740
8741
8742
8743
8744
8745
8746
8747
8748
8749
8750
8751
8752
8753
8754
8755
8756
8757
8758
8759
8760
8761
8762
8763
8764
8765
8766
8767
8768
8769
8770
8771
8772
8773
8774
8775
8776
8777
8778
8779
8780
8781
8782
8783
8784
8785
8786
8787
8788
8789
8790
8791
8792
8793
8794
8795
8796
8797
8798
8799
8800
8801
8802
8803
8804
8805
8806
8807
8808
8809
8810
8811
8812
8813
8814
8815
8816
8817
8818
8819
8820
8821
8822
8823
8824
8825
8826
8827
8828
8829
8830
8831
8832
8833
8834
8835
8836
8837
8838
8839
8840
8841
8842
8843
8844
8845
8846
8847
8848
8849
8850
8851
8852
8853
8854
8855
8856
8857
8858
8859
8860
8861
8862
8863
8864
8865
8866
8867
8868
8869
8870
8871
8872
8873
8874
8875
8876
8877
8878
8879
8880
8881
8882
8883
8884
8885
8886
8887
8888
8889
8890
8891
8892
8893
8894
8895
8896
8897
8898
8899
8900
8901
8902
8903
8904
8905
8906
8907
8908
8909
8910
8911
8912
8913
8914
8915
8916
8917
8918
8919
8920
8921
8922
8923
8924
8925
8926
8927
8928
8929
8930
8931
8932
8933
8934
8935
8936
8937
8938
8939
8940
8941
8942
8943
8944
8945
8946
8947
8948
8949
8950
8951
8952
8953
8954
8955
8956
8957
8958
8959
8960
8961
8962
8963
8964
8965
8966
8967
8968
8969
8970
8971
8972
8973
8974
8975
8976
8977
8978
8979
8980
8981
8982
8983
8984
8985
8986
8987
8988
8989
8990
8991
8992
8993
8994
8995
8996
8997
8998
8999
9000
9001
9002
9003
9004
9005
9006
9007
9008
9009
9010
9011
9012
9013
9014
9015
9016
9017
9018
9019
9020
9021
9022
9023
9024
9025
9026
9027
9028
9029
9030
9031
9032
9033
9034
9035
9036
9037
9038
9039
9040
9041
9042
9043
9044
9045
9046
9047
9048
9049
9050
9051
9052
9053
9054
9055
9056
9057
9058
9059
9060
9061
9062
9063
9064
9065
9066
9067
9068
9069
9070
9071
9072
9073
9074
9075
9076
9077
9078
9079
9080
9081
9082
9083
9084
9085
9086
9087
9088
9089
9090
9091
9092
9093
9094
9095
9096
9097
9098
9099
9100
9101
9102
9103
9104
9105
9106
9107
9108
9109
9110
9111
9112
9113
9114
9115
9116
9117
9118
9119
9120
9121
9122
9123
9124
9125
9126
9127
9128
9129
9130
9131
9132
9133
9134
9135
9136
9137
9138
9139
9140
9141
9142
9143
9144
9145
9146
9147
9148
9149
9150
9151
9152
9153
9154
9155
9156
9157
9158
9159
9160
9161
9162
9163
9164
9165
9166
mmnotes.txt - Notes
-------------------

These are informal notes on some of the recent proofs and other topics.



(7-Dec-2020) Partial unbundling of ax-7, ax-8, ax-9 (notes by Benoit Jubin)
---------------------------------------------------------------------------

This note discusses the recent partial unbundling of the axiom of
equality ax-7 and the predicate axioms ax-8 and ax-9 in set.mm.

The axiom of equality asserts that equality is a right-Euclidean binary relation
on variables:
  ax-7 |- ( x = y -> ( x = z -> y = z ) )

It can be weakened by adding a DV (disjoint variable) condition on x and y:
  ax7v |- ( x = y -> ( x = z -> y = z ) ) , DV(x,y)

and this scheme can itself be weakened by adding extra DV conditions:
  ax7v1 |- ( x = y -> ( x = z -> y = z ) ) , DV(x,y) , DV(x,z)
  ax7v2 |- ( x = y -> ( x = z -> y = z ) ) , DV(x,y) , DV(y,z)

We prove, in ax7, that either ax7v or the conjunction of ax7v1 and ax7v2
(together with earlier axioms) suffices to recover ax-7.  The proofs are
represented in the following simplified diagram (equid is reflexivity and
equcomiv is unbundled symmetry):

                 --> ax7v1 --> equid --
                /                      \
ax-7 --> ax7v --                        --> equcomiv --> ax7
                \                      /
                 --> ax7v2 ------------
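
The fact driving this diagram -- a reflexive, right-Euclidean relation is
automatically symmetric and transitive -- can be sanity-checked on a finite
example.  The following Python sketch is only an illustration (the relation R
below is an arbitrary choice), not part of the note or of any verifier:

```python
# Finite check: reflexive + right-Euclidean implies equivalence relation.

def is_reflexive(R, S):
    return all((a, a) in R for a in S)

def is_right_euclidean(R, S):
    # aRb and aRc imply bRc (the shape of ax-7)
    return all((b, c) in R for a in S for b in S for c in S
               if (a, b) in R and (a, c) in R)

def is_symmetric(R, S):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R, S):
    return all((a, c) in R for (a, b) in R for (b2, c) in R if b == b2)

S = {0, 1, 2, 3}
# an equality-like relation with two blocks {0,1} and {2,3}
R = {(a, b) for a in S for b in S if (a < 2) == (b < 2)}

assert is_reflexive(R, S) and is_right_euclidean(R, S)
assert is_symmetric(R, S) and is_transitive(R, S)
```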

The predicate axioms ax-8 and ax-9 can be similarly weakened, and the proofs are
actually simpler, now that the equality predicate has been proved to be an
equivalence relation on variables.  This is a general result.  If an n-ary
predicate P is added to the language, then one has to add the following n
predicate axioms for P:
  ax-P1 |- ( x = y -> ( P(x, z_2, ..., z_n) -> P(y, z_2, ..., z_n) ) )
  ...
  ax-Pn |- ( x = y -> ( P(z_1, ..., z_{n-1}, x) -> P(z_1, ..., z_{n-1}, y) ) )

Any of these axioms can be weakened by adding the DV condition DV(x,y), and it
is also sufficient to replace it by the conjunction of the two schemes:
  ax-Piv1 |- ( x = y ->
         ( P(z_1, ..., x, ..., z_n) -> P(z_1, ..., y, ..., z_n) ) ) , x fresh
  ax-Piv2 |- ( x = y ->
         ( P(z_1, ..., x, ..., z_n) -> P(z_1, ..., y, ..., z_n) ) ) , y fresh

where "fresh" means "disjoint from all other variables".  The proof is similar
to ax8 and ax9 and simply consists in introducing a fresh variable, say t, and
from
  |- ( x = t -> ( P(z_1, ..., x, ..., z_n) -> P(z_1, ..., t, ..., z_n) ) )
  |- ( t = y -> ( P(z_1, ..., t, ..., z_n) -> P(z_1, ..., y, ..., z_n) ) )
  |- ( x = y -> E. t ( x = t /\ t = y ) )
one can prove axPi.
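
For instance, with the binary membership predicate (the case of ax-8), the
chain reads as follows, rendered here in informal LaTeX notation:

```latex
% Worked instance for ax-8 (membership predicate), with t a fresh variable:
\begin{align*}
&\vdash x = t \to ( x \in z \to t \in z )
   && \text{instance with } \mathrm{DV}(x,t)\\
&\vdash t = y \to ( t \in z \to y \in z )
   && \text{instance with } \mathrm{DV}(t,y)\\
&\vdash x = y \to \exists t\, ( x = t \wedge t = y )
   && t \text{ fresh}\\
&\vdash x = y \to ( x \in z \to y \in z )
   && \text{chaining under } \exists t
\end{align*}
```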

Note that ax-7 can also be seen as the first predicate axiom for the binary
predicate of equality.  This is why it does not appear in Tarski's FOL system,
being a special case of his scheme ( x = y -> ( ph -> ps ) ) where ph is an
atomic formula and ps is obtained from ph by substituting y for an
occurrence of x.  The above paragraphs prove that this scheme can be
weakened by adding
the DV condition DV(x,y).


===============================================================================


(21-Dec-2017) Processing of $[ ... $] file inclusions
-----------------------------------------------------

See also the following Google Group posts:

Description and example:
https://groups.google.com/d/msg/metamath/4B85VKSg4j4/8UrpcqR4AwAJ
"Newbie questions":
https://groups.google.com/d/msg/metamath/7uJBdCd9tbc/dwP2jQ3GAgAJ
"Condensed version of set.mm?":
https://groups.google.com/d/msg/metamath/3aSZ5D9FxZk/MfrFfGiaAAAJ
Original proposal:
https://groups.google.com/d/msg/metamath/eI0PE0nPOm0/8O9s1sGlAQAJ

(Updated 31-Dec-2017:  1. 'write source.../no_delete' is now 'write
source.../keep_includes'.   2. Added 'set home_directory' command; see
'help set home_directory'.)
(Updated 1-Jan-2018: Changed 'set home_directory' to 'set root_directory'.)
(Updated 1-Nov-2019: Added Google Group links above.)


1. Enhanced "write source" command
----------------------------------

The "write source" command in metamath.exe will be enhanced with a
"/split" qualifier, which will write included files separately.  The name
of the main (starting) file will be the "write source" argument (as it
is now), and the names of included files will be taken from the original
file inclusions.


2. New markup-type directives related to file inclusions
--------------------------------------------------------

Recall the file inclusion command, "$[ file.mm $]", given in the
Metamath spec.  The spec will be clarified so that, for basic .mm file
verification, this command should be ignored when it occurs inside of a
comment (and it should exist only at the outermost scope, as well).

The metamath.exe program will perform additional actions based on
special markup comments starting "Begin $[", "End $[", and "Skip $[".
These are not part of the Metamath spec and can be ignored by basic
verifiers.  The metamath.exe program allows the .mm file to be written
as a whole or to be split up into modules (with "write source ...
/split"), and this markup controls how the modules will be created.  In
particular, the markup allows us to go back and forth seamlessly between
split .mm files and a single unsplit .mm file.

These markup comments are normally created automatically whenever a .mm
file containing includes is written by "write source" without the
"/split" qualifier.  They can also be inserted by hand to delineate how
the .mm file should be split into modules.  They are converted back to
file inclusions when "write source" is used with the "/split" qualifier.

  "$( Begin $[ file.mm $] $)" - indicates where an included file starts
  "$( End $[ file.mm $] $)" - indicates where an included file ends
  "$( Skip $[ file.mm $] $)" - indicates there was a file inclusion
      at this location in the split files, that wasn't used because
      file.mm was already included earlier.

To summarize:  Split files will have only "$[ file.mm $]" inclusions,
like before.  An unsplit file will have only these three special comments.

(Per the Metamath spec, recall that when a file is included more than
once, only the first inclusion will happen with subsequent ones ignored.
This feature allows us to create subsections of a .mm file that are
themselves stand-alone .mm files.  We need the "Skip" directive to mark
the location of ignored inclusions.)

The "read" command will accept either a single file or split files or
any combination (e.g. when the main file includes a file that originally
also contained includes but was separately written without "/split").
Files can contain any combination of inclusion directives $[ $] and the
3 special comments, except that each "$( Begin $[..." must have a
matching "$( End $[...".


3. Behavior of "read" command
-----------------------------

The "read" command builds an internal buffer corresponding to an unsplit
file.  If "write source" does not have the "/split" qualifier, this
buffer will become the new source file.

When "read" encounters an inclusion command or one of the 3 special
comments, the following actions are taken:

Case 3.1:
---------

"$[ file.mm $]"

  If file.mm has not already been included, this directive will
  be replaced with
  "$( Begin $[ file.mm $] $) <file.mm content> $( End $[ file.mm $] $)"
  If file.mm doesn't exist, an error will be reported.

  If file.mm has already been included, this directive will
  be replaced with "$( Skip $[ file.mm $] $)".

Case 3.2:
---------

"$( Begin $[ file.mm $] $) <file.mm content> $( End $[ file.mm $] $)"

  If file.mm has not already been included, this directive will
  be left alone i.e. will remain
  "$( Begin $[ file.mm $] $) <file.mm content> $( End $[ file.mm $] $)"

  If file.mm has already been included, this directive including the
  <file.mm content> will be replaced with "$( Skip $[ file.mm $] $)".
  Before discarding it, <file.mm content> will be compared to the content
  of file.mm previously included, and if there is a mismatch, a warning
  will be reported.

Case 3.3:
---------

"$( Skip $[ file.mm $] $)"

  If file.mm has not already been included, this directive will
  be replaced with
  "$( Begin $[ file.mm $] $) <file.mm content> $( End $[ file.mm $] $)".
  If file.mm doesn't exist, an error will be reported.

  If file.mm has already been included, this directive will
  be left alone i.e. will remain "$( Skip $[ file.mm $] $)".

Error handling:
---------------

Any comments that don't exactly match the 3 patterns will be silently
ignored i.e. will act as regular comments.  For example,
"$( Skip $[ file.mm $)" and "$( skip $[ file.mm $] $)" will act as
ordinary comments.  In general, it is difficult to draw a line between
what is a comment and what is a markup with a typo, so we take the most
conservative approach of not tolerating any deviation from the patterns.
This shouldn't be a major problem because most of the time the markup
will be generated automatically.

Error messages will be produced for "$( Begin $[.." without a matching
"$( End $[..."  (i.e. with the same file name) and for included files
that are missing.
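
The Case 3.1-3.3 rules can be sketched in a few lines of Python.  This is an
illustrative simplification, not metamath.exe's actual implementation: it
handles only bare "$[ file.mm $]" directives (it does not detect inclusions
inside comments, nor re-read the Begin/End/Skip forms), and the file layout is
hypothetical:

```python
import re
from pathlib import Path

# Expand "$[ file.mm $]" directives into the Begin/End/Skip comments
# described above, with first-inclusion-wins semantics.
INCLUDE = re.compile(r"\$\[ (\S+) \$\]")

def inline(text, root, seen=None):
    seen = set() if seen is None else seen
    def repl(m):
        name = m.group(1)
        if name in seen:
            # already included earlier: Case 3.1, second branch
            return "$( Skip $[ %s $] $)" % name
        seen.add(name)
        # first inclusion: splice the file in, recursively
        content = inline((Path(root) / name).read_text(), root, seen)
        return "$( Begin $[ %s $] $) %s $( End $[ %s $] $)" % (
            name, content, name)
    return INCLUDE.sub(repl, text)
```

For example, a file containing "$[ b.mm $] $[ b.mm $]" expands to a Begin/End
block followed by a Skip comment, mirroring Cases 3.1 above.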


4. Behavior of "write source" command
-------------------------------------

When "write source" is given without the "/split" qualifier, the
internal buffer (as described above) is written out unchanged.  When
accompanied by the "/split" qualifier, the following actions are taken.

Case 4.1:
---------

"$[ file.mm $]"

  (This directive should never exist in the internal buffer unless
  there is a bug.)

Case 4.2:
---------

"$( Begin $[ file.mm $] $) <file.mm content> $( End $[ file.mm $] $)"

  file.mm will be created containing "<file.mm content>".  The directive
  will be changed to "$[ file.mm $]" in the parent file.

Case 4.3:
---------

"$( Skip $[ file.mm $] $)"

  This directive will be changed to "$[ file.mm $]" in the
  parent file.


5. File creation and deletion
-----------------------------

When "write source" is used with "/split", the main file and all
included files (if they exist) will be overwritten.  As with the
existing "write source", the old versions will be renamed with a "~1"
suffix (and any existing "~1" renamed to "~2" and so on through "~9",
whose existing version will be deleted).

When "write source" is used without "/split", the main file will be
overwritten, and any existing included files will be deleted.  More
precisely, by "deleted" we mean that an existing included file will be
renamed to "~1", any existing "~1" renamed to "~2", etc. until "~9",
which will be actually deleted.  The purpose of doing this is to prevent
accidental edits of included files after the main file is written
without "/split" and thus causing confusing diverging versions.
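
The "~n" rotation described above amounts to the following sketch
(illustrative Python, not the actual C code in metamath.exe):

```python
import os

# Before overwriting `name`, shift name -> name~1, name~1 -> name~2, ...,
# deleting any existing name~9 so at most nine versions are kept.
def rotate_versions(name, depth=9):
    last = "%s~%d" % (name, depth)
    if os.path.exists(last):
        os.remove(last)                      # oldest version is dropped
    for n in range(depth - 1, 0, -1):
        src = "%s~%d" % (name, n)
        if os.path.exists(src):
            os.rename(src, "%s~%d" % (name, n + 1))
    if os.path.exists(name):
        os.rename(name, name + "~1")         # current file becomes ~1
```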

A new qualifier, "/no_versioning", will be added to "write source" to
turn off the "~n" versioning if it isn't wanted.  (Personally,
versioning has helped me recover from mistakes, and it's easy enough to
"rm *~*" at the end of a work session.)

Another new qualifier, "/keep_includes", will be added to "write source" to
turn off the file deletion when "/split" is not specified.  This can be
useful in odd situations.  For example, suppose main.mm includes abc.mm
and (stand-alone) def.mm, and def.mm also includes abc.mm.  When writing
out def.mm without "/split", by default abc.mm will be deleted, causing
main.mm to fail.  (Another way to recover is to rewrite def.mm with
"/split".  Or recover from abc.mm~1.)


6. Comments inside of includes
------------------------------

A comment inside of a file inclusion, such as
"$[ file.mm $( pred calc $) $]", will be silently deleted when it is
converted to the nonsplit version e.g. "$( Skip $[ file.mm $] $)".
Instead, put the comment before or after the inclusion, such as
"$[ file.mm $] $( pred calc $)".


7. Directories
--------------

Officially, directories aren't supported.  In practice, an included file
in a subdirectory can be specified by its path relative to the current
working directory (the directory from which metamath.exe is launched).
However, it is strongly recommended to use "-" in the file name rather
than directory levels, e.g. set-mbox-nm.mm, and this will be a
requirement for set.mm at this time.

A new command was added to change the working directory assumed by the
program.  See 'help set root_directory'.

Therefore, if included files are present, you shouldn't read set.mm from
another directory with a command such as "read test/set.mm", because
included files will _not_ be assumed to be in test/.  Instead, you
should either launch metamath.exe from the test/ directory, or you
should 'set root_directory test' so that you can type "read set.mm".
(Usually the error messages will let you know right away when your
included files aren't found where expected.)

The reason we don't just extract and use the "test/" prefix of set.mm
automatically is that if we decide to support directories relative to
the root directory, it will be legitimate to "read mbox/mbox-nm.mm",
where mbox/ is a project subdirectory under the root directory.

(End of "(21-Dec-2017) Processing of $[ ... $] file inclusions")



(14-May-2017) Dates in set.mm
-----------------------------

Dates below proofs, such as "$( [5-Nov-2016] )$", are now ignored by
metamath.exe (version 0.143, 14-May-2017).  Only the dates in
"(Contributed by...", "(Revised by...", and "(Proof shortened by...)"
are used for the Most Recent Proofs page and elsewhere.

If a "(Contributed by...)" markup tag is not present in a theorem's
comment _and_ the proof is complete, then "save new_proof" or
"save proof" will add "(Contributed by ?who?, dd-mmm-yyyy.)" to the
theorem's comment, where dd-mmm-yyyy is today's date.

You can either change the "?who?" to your name in an editor, or you can
use the new command "set contributor" to specify it before "save
new_proof" or "save proof".  See "help set contributor".

If you are manually pasting proofs into set.mm, say from mmj2, then at
the end of the day you can run "save proof */compressed/fast" to add
missing contributor dates, followed by a global replacement of "?who?"
with your name.

"verify markup" in Version 0.143 includes some additional error checking,
which will cause warnings on older versions of set.mm.  However, it
no longer checks the dates below proofs.

The dates below proofs will be deleted soon in set.mm.  If someone
is using them outside of the metamath program, let me know so I can
postpone the deletion.  The old code to check them can be re-enabled
by uncommenting "#define DATE_BELOW_PROOF" in mmdata.h.

For converting old .mm files to the new "(Contributed by...)" tag, the
program has the following behavior:  the date used is the (earlier) date
below the proof if it exists, otherwise it is today's date.  Thus an old
.mm file can be converted with "read xxx.mm", "save proof
*/compressed/fast", and "write source xxx.mm".  Note that if there are
two dates below the proof, the second one is used, and the first one is
intended for a "(Revised by...)" or "(Proof shortened by...)" tag that
must be inserted by hand.  Searching for "] $) $( [" will identify cases
with two dates that must be handled with manual editing.
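
That manual search can be automated; here is a small illustrative helper
(the pattern is taken literally from the paragraph above):

```python
import re

# Report the 1-based line numbers containing the two-date pattern
# "] $) $( [", so those proofs can be edited by hand.
TWO_DATES = re.compile(r"\] \$\) \$\( \[")

def two_date_lines(text):
    return [i + 1 for i, line in enumerate(text.splitlines())
            if TWO_DATES.search(line)]
```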

Tip:  if you want to revert to the old way of checking (and inserting)
dates below proofs, uncomment the "#define DATE_BELOW_PROOF" in mmdata.h
before compiling.


(11-May-2016) New markup for isolating statements and protecting proofs
-----------------------------------------------------------------------

(Updated 10-Jul-2016:  changed "show restricted" to "show discouraged";
added "set discouragement off"; see below.)

Two optional markup tags have been defined for use inside of
statement description comments:

  "(Proof modification is discouraged.)"
  "(New usage is discouraged.)"

The metamath program has been updated to discourage accidental proof
modification or accidental usage of statements with these tags.

These tags have been added to set.mm in the complex number construction,
axiomatic studies, and obsolete sections, as well as to specific
theorems that normally should be avoided or should not have their proof
changed for various reasons.  I also added them to some mathboxes (AS
and JBN) that contain unconventional theorems or notation.

Most users will never encounter the effect of "discouraged" tags since
they are in areas that are normally not touched or used.

"write source.../rewrap" will prevent the new tags from being broken
across lines.  This is intended to make editing tasks easier.  For
example, if you are doing a major revision to a "discouraged" section
such as the complex number construction, you can change the tags
temporarily (like changing "is discouraged" to "xx discouraged"
throughout the section) then change them back when done.

The following commands recognize "(Proof modification is discouraged.)":

  "prove", "save new_proof"

The following commands recognize "(New usage is discouraged.)":

  "assign", "replace", "improve", "minimize_with"

In the description below, the term "restricted" means a statement's
comment has one or both of the new tags.

Originally, I was going to ask an override question when encountering a
restricted statement, but I decided against that because prompts become
unpredictable, making the user's "flow" awkward and scripts more
difficult to write.  Instead, the user can specify "/override" in the
command to accomplish this.

A warning or error message is issued when there is a potential use or
modification of a restricted statement.  An error message means the
requested action wasn't performed (because the user didn't specify
"/override"), and a warning message means the action was performed but
the user should be aware that the action is "discouraged".  To make the
messages more visible, they have a blank line before and after them, and
they always begin with ">>>" for emphasis.
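
Under the stated assumptions (the tag strings and behavior described in this
note), the core check reduces to a few lines.  This Python sketch is purely
illustrative; the real logic lives in metamath.exe's C sources:

```python
# A command consults the target statement's description comment
# before acting, honoring /override as described above.
PROOF_TAG = "(Proof modification is discouraged.)"
USAGE_TAG = "(New usage is discouraged.)"

def allowed(action, comment, override=False):
    """Return (ok, message) for a requested action on a statement."""
    tag = PROOF_TAG if action in ("prove", "save new_proof") else USAGE_TAG
    if tag not in comment:
        return True, ""
    if override:
        return True, ">>> ?Warning: this action is discouraged."
    return False, ">>> ?Error: use /OVERRIDE to do this."
```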

The behavior of individual commands is as follows.

  "prove" - Without "/override", will give an error message and prevent
  entering the Proof Assistant when a proof is restricted i.e. when the
  statement's comment contains "(Proof modification is discouraged.)".
  With "/override", a warning is issued, but the user may enter the Proof
  Assistant.

  "save new_proof" - Without "/override", will give an error message and
  prevent saving.  With "/override", a warning is issued, but the save is
  allowed.

  "assign", "replace" - Without /override, will give an error message and
  not allow an assignment with a restricted statement i.e. when the
  assigned statement's comment contains "(New usage is discouraged.)".
  With /override, will give a warning message but will do the assignment.

  "improve", "minimize_with" - Without /override, will silently skip
  restricted statements during their scans.  With /override, will consider
  all statements and will give a warning if any restricted statements are
  used.

Here is an example of a session with idALT, which has the comment tag
"(Proof modification is discouraged.)".

    MM> prove idALT

    >>> ?Error: Modification of this statement's proof is discouraged.
    >>> You must use PROVE ... / OVERRIDE to work on it.

    MM> prove idALT/override
    Entering the Proof Assistant.  HELP PROOF_ASSISTANT for help, EXIT to exit.
    You will be working on statement (from "SHOW STATEMENT idALT"):
    101 idALT $p |- ( ph -> ph ) $= ... $.
    Note:  The proof you are starting with is already complete.

    >>> ?Warning: Modification of this statement's proof is discouraged.

    MM-PA> minimize_with id
    Bytes refer to compressed proof size, steps to uncompressed length.
    Proof of "idALT" decreased from 51 to 9 bytes using "id".
    MM-PA> save new_proof/compressed

    >>> ?Error: Attempt to overwrite a proof whose modification is discouraged.
    >>> Use SAVE NEW_PROOF ... / OVERRIDE if you really want to do this.

    MM-PA> save new_proof/compressed/override

    >>> ?Warning: You are overwriting a proof whose modification is discouraged.

    The new proof of "idALT" has been saved internally.
    Remember to use WRITE SOURCE to save changes permanently.
    MM-PA>

Here is an example of a session trying to use re1tbw2 (ax-1 twin) which
has "(New usage is discouraged.)".

    MM> prove r19.12
    Entering the Proof Assistant.  HELP PROOF_ASSISTANT for help, EXIT to exit.
    You will be working on statement (from "SHOW STATEMENT r19.12"):
    $d x y $.  $d y A $.  $d x B $.
    7283 r19.12 $p |- ( E. x e. A A. y e. B ph -> A. y e. B E. x e. A ph ) $= ...
          $.
    Note:  The proof you are starting with is already complete.
    MM-PA> show new_proof /from 70/to 70
     70     ralrimi.2=ax-1    $a |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x e.
                                                                 A A. y e. B ph ) )
    MM-PA> delete step 70
    A 12-step subproof at step 70 was deleted.  Steps 70:112 are now 59:101.
     59        ralrimi.2=?       $? |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x
                                                              e. A A. y e. B ph ) )
    MM-PA> assign last re1tbw2

    >>> ?Error: Attempt to assign a statement whose usage is discouraged.
    >>> Use ASSIGN ... / OVERRIDE if you really want to do this.

    MM-PA> assign last re1tbw2/override

    >>> ?Warning: You are assigning a statement whose usage is discouraged.

    MM-PA> undo
    Undid:  ASSIGN LAST re1tbw2 / OVERRIDE
     59        ralrimi.2=?       $? |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x
                                                              e. A A. y e. B ph ) )
    MM-PA> improve all
    A proof of length 12 was found for step 59.
    Steps 59 and above have been renumbered.
    CONGRATULATIONS!  The proof is complete.  Use SAVE NEW_PROOF to save it.
    Note:  The Proof Assistant does not detect $d violations.  After saving
    the proof, you should verify it with VERIFY PROOF.

Note that "improve all", which scans backwards, skipped over re1tbw2 and
picked up ax-1:

    MM-PA> show new_proof /from 70/to 70
     70     ralrimi.2=ax-1    $a |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x e.
                                                                 A A. y e. B ph ) )
    MM-PA> undo
    Undid:  IMPROVE ALL
     59        ralrimi.2=?       $? |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x
                                                              e. A A. y e. B ph ) )

With "/override", it does not skip re1tbw2 but assigns it since it is the
first match encountered (before ax-1 in the backward scan):

    MM-PA> improve all/override

    >>> ?Warning:  Overriding discouraged usage of statement "re1tbw2".

    A proof of length 12 was found for step 59.
    Steps 59 and above have been renumbered.
    CONGRATULATIONS!  The proof is complete.  Use SAVE NEW_PROOF to save it.
    Note:  The Proof Assistant does not detect $d violations.  After saving
    the proof, you should verify it with VERIFY PROOF.
    MM-PA>


If you want to see which statements in a specific section have
restrictions, use "search.../comment" e.g.

    MM> search ax-1~bitr 'is discouraged'/comment
    101 idALT $p "...uted by NM, 5-Aug-1993.) (Proof modification is discouraged.)"
    659 dfbi1gb $p "...ry Bush, 10-Mar-2004.) (Proof modification is discouraged.)"

Program note:  The new markup tags are looked up via the function
getMarkupFlag() in mmdata.c.  Since string searches are slow, the result
of the first search in each statement comment is memoized (saved) so
that subsequent searches can be effectively instant.

Two commands were added primarily for database maintenance:


"show discouraged" will list all of the
statements with "is discouraged" restrictions and their uses in the
database (in case of discouraged usage) or the number of steps (in case
of a proof whose modification is discouraged).  It is verbose and
primarily intended to assist a script to compare a modified database
with an earlier version.  It will not be of interest to most users.

    MM> help show discouraged
    Syntax:  SHOW DISCOURAGED

    This command shows the usage and proof statistics for statements with
    "(Proof modification is discouraged.)" and "(New usage is
    discouraged.)" markup tags in their description comments.  The output
    is intended to be used by scripts that compare a modified .mm file
    to a previous version.

    MM> show discouraged
    ...
    SHOW DISCOURAGED:  Proof modification of "tru2OLD" is discouraged (9 steps).
    SHOW DISCOURAGED:  New usage of "tru2OLD" is discouraged (0 uses).
    SHOW DISCOURAGED:  New usage of "ee22" is discouraged (2 uses).
    SHOW DISCOURAGED:  "ee22" is used by "ee21".
    SHOW DISCOURAGED:  "ee22" is used by "ee33".
    ...

"set discouragement off" will turn off the blocking of commands caused
to "...is discouraged" markup tags.  It does the equivalent of always
specifying "/override" on those commands.  It is intended as a
convenience during maintenance of a "discouraged" area of the database
that the user is very familiar with, such as the construction of complex
numbers.  It is not recommended for most users.

    MM> help set discouragement
    Syntax:  SET DISCOURAGEMENT OFF or SET DISCOURAGEMENT ON

    By default this is set to ON, which means that statements whose
    description comments have the markup tags "(New usage is discouraged.)"
    or "(Proof modification is discouraged.)" will be blocked from usage
    or proof modification.  When this setting is OFF, those actions are no
    longer blocked.  This setting is intended only for the convenience of
    advanced users who are intimately familiar with the database, for use
    when maintaining "discouraged" statements.  SHOW SETTINGS will show you
    the current value.

    MM> set discouragement off
    "(...is discouraged.)" markup tags are no longer honored.

    >>> ?Warning: This setting is intended for advanced users only.  Please turn
    >>> it back ON if you are not intimately familiar with this database.

    MM> set discouragement on
    "(...is discouraged.)" markup tags are now honored.



(10-Mar-2016) metamath program version 0.125
--------------------------------------------

The following changes were made:

1. A new qualifier, '/fast', was added to 'save proof' and 'show proof'.
See the 9-Mar-2016 entry below for an application.

2. Long formulas are no longer wrapped by 'write source.../rewrap' but
should be wrapped by hand to fit in less than 80 columns.  The wrapping
was removed because a human can better judge where to split formulas for
readability.  Comments and indentation are still reformatted as before.

3. Added space between adjacent "}" and "{" in the HTML output.

4. A bug in the /explicit/packed proof format was fixed.  See 'help save
proof' for a list of all formats.

5. To reference a statement by statement number in 'show statement',
'show proof', etc., prefix the number with "#".  For example, 'show
statement #58' will show a1i.  This was added to assist program
debugging but may occasionally be useful for other purposes.  The
complete list of statement lookup formats in shown in 'help search'.





(9-Mar-2016) Procedure to change a variable name in a theorem
--------------------------------------------------------------

The metamath program has been updated in Version 0.125 (10-Mar-2016)
with a new qualifier, '/fast', that merely changes the proof format
without compressing or uncompressing the proof.  This makes format
conversions very fast for making database changes.  The format of the
entire database can be changed from /compressed to /explicit/packed, and
vice-versa, in about a minute each way.

The /explicit/packed format is described here:
https://groups.google.com/d/msg/metamath/xCUNA2ttHew/RXSNzdovBAAJ
You can also look at 'help save proof' in the metamath program.

The basic rules we will be using are:

1. When the proofs are saved in explicit format, you can change $f and
   $e order.

2. When the proofs are saved in compressed format, you can change the
   name of a variable to another if the two variables have adjacent $f's.


======= The conversion procedure: =======

To retrofit the new symbol variables added to set.mm to your theorems,
for example changing "P" to ".+", you can use the following procedure.

First, save all proofs in explict format:

./metamath set.mm
MM> save proof */explicit/packed/fast
MM> write source set.mm
MM> exit

    (Hint: 'save proof *' lists all proofs it is saving.  To suppress this
    output, type "q" at the scrolling question after the first page.  It will
    not really quit; instead, the proof saving will complete silently.)

Next, edit set.mm to place the $f for the "P" adjacent to (either immediately
before or immediately after) the $f for the ".+".  Then resave the proofs
in compressed format:

./metamath set.mm
MM> save proof */compressed/fast
MM> write source set.mm
MM> exit

In your text editor, substitute P for .+ in the theorem you want to
change.  Make sure you include the $p and any $d and $e statements
associated with the theorem.  If the $d and $e statements affect other
theorems in the same block, you will also have to make the P to .+
substitution in those $p's as well.

You are now done with the change.  However, you probably want to restore
the original $f order to make your database compatible with the standard
set.mm.  You can postpone doing this until you have finished making all
of your variable name changes as above.  First, save all proofs in
explict format:

./metamath set.mm
MM> save proof */explicit/packed/fast
MM> write source set.mm
MM> exit

Next, edit set.mm to restore the original $f order. It may be easiest
just to copy and paste the $f section from the standard set.mm.

Finally, you probably want to save all proofs in compressed format
since the file size will be smaller and easier to work with:

./metamath set.mm
MM> save proof */compressed/fast
MM> write source set.mm
MM> exit




(28-Feb-2015) Stefan O'Rear's notes on recent proofs
----------------------------------------------------

# New definitions

df-har: The Hartogs function, which restricts to cardinal successor on initial
ordinals.  Since the latter is somewhat important in higher set theory, I
expect this to get used a bit.  One question about the math symbol: standard
notation for ( har ` x ) is \aleph(x), but I didn't want to overlap the math
symbol used for df-aleph, so I left this as Latin text for now.

df-nzr: Nonzero rings:  A number of properties of linear independence fail for
the zero ring, so I gave a name to all others.

df-wdom: Weak dominance, i.e. dominance considered using onto functions instead
of 1-1.  The starred symbol seems to be relatively standard.  I'm quite pleased
with how ~ wdomd turned out.

df-lindf,df-linds: Definition of a linearly independent family resp. set of
vectors in a module.  Was initially trying to do this with just one definition
but two seems to work much better with corner cases.

# Highlights

hsmex: The class of sets hereditarily smaller than a given set exists; a
formalization of the proof in
http://math.boisestate.edu/~holmes/holmes/hereditary.pdf .  With AC this is
simpler as it follows from the existence of arbitrarily large regular
cardinals.  Intermediate steps use Hartogs numbers and onto functions quite
heavily, so df-har and df-wdom were added to support this; it also uses
iterated unions (ituni*) and order types (otos*), but the former is not a
standard concept and the latter has several definitional issues, so both are
temporary definitions for now.

marypha1,marypha2: P. Hall's marriage theorem, a surprisingly annoying
combinatorial result which is in the dependency chain for the vector space
dimension theorem.

kelac2,dfac21i: Recover the axiom of choice from Tychonoff's theorem (which we
don't have yet).  Required a number of additional results on box-shaped subsets
of cross products, such as boxriin and boxcuts.

In an attempt to unify the empty and nonempty cases, we are now considering
"relative intersections" of the form ( A i^i |^| B ) and ( A i^i |^|_ x e. I B
), assuming that all elements of B are subsets of A, we get the type-theoretic
behavior that an empty intersection is the domain of discourse of a particular
structure, and not the class _V.  Several theorems are added to support this
usage, and the related ( fi ` ( { A } u. B ) ).

lmisfree: There have been a number of recent questions about the correctness of
various definitions of basis.  This hopefully helps to clarify the situation by
proving my earlier conjecture that what we have is exactly what is needed to
witness an isomorphism onto a free module.  Notable intermediate steps here are
islbs (splits our notion of basis into the spanning and independence parts),
islindf4 ("no nontrivial representations of zero"), and lbslcic (only the
cardinality of the index set matters).

Along the way this required separating independent sets and families from the
previous notion of bases; a new "independent sets and families" section
contains the basic properties there.

domunfican: A cancellation law for cardinal arithmetic which came up in
marypha1 but may be of independent interest.

# Possible future directions

Probably a lot of polynomial stuff soon, with maybe a vector space dimension
theorem thrown in for good measure.  A rough priority order, and less final the
farther you go:

1. Fraction ring/field development; rational function fields

2. Recursive decomposition of polynomials

3. AA = ( CCfld IntgRing QQ) and re-re-define _ZZ = ( CCfld IntgRing ZZ ) $.

4. Relating integral elements to finitely generated modules.

5. Hilbert basis theorem

6. Integral closures are rings; aaaddcl/aamulcl.

7. Cowen-Engeler lemma (Schechter UF2); finite choice principles from
ultrafilters

8. M. Hall's marriage theorem

9. The next natural property of independent sets: the exchange theorem and the
finite dimension theorem for vector spaces

10. General dimension theorem from ultrafilters

11. Gauss' lemma on polynomials; ZZ = ( _ZZ i^i QQ )

12. Polynomial rings are UFDs




(17-Feb-2015) Mario Carneiro's notes on recent proofs
-----------------------------------------------------

Notes:

ifbothda:  Common argumentation style for dealing with if, may be good
to know

disjen, disjenex:  When constructing the reals, we needed extra elements
not in CC for +oo and -oo, and for that purpose we used ~P U. CC, ~P ~P
U. CC.  This works great if you only want a few new elements, but for
arbitrarily many elements this approach doesn't work.  So this theorem
generalizes this sort of construction to show that you can build an
arbitrarily large class of sets disjoint from a given base class A.

domss2, domssex2, domssex:  One application of disjen is that you can
turn any injection F :  A -1-1-> B around into G :  B -1-1-onto-> C
where A C_ C, and which is the identity on elements of A. I'm thinking
about taking advantage of this in the field extensions for Cp, since
this way you don't need a canonical injection into the extension but can
actually build the extension around the original field so that it is
literally a subfield of the extension.  It could also be used, if
desired, as a means of building compatible extensions in the
construction of the reals (so that om = NN0).

ghmker:  the kernel of a group homomorphism is a normal subgroup

crngpropd, subrgpropd, lmodpropd, lsspropd, lsppropd, assapropd:
property theorems

tgcnp, tgcn, subbascn:  checking continuity on a subbase or basis for a
topology

ptval:  Product topology.  Important theorems are pttop, ptuni, ptbasfi,
ptpjcn, ptcnp, ptcn.

pt1hmeo, ptunhmeo:  combining these judiciously allows you to show that
indexed product topologies are homeomorphic to iterated binary product
topologies, which for example can be used to prove ptcmpfi, which is
basically Tychonoff for finite index sets.

metdsval, metds0, metdseq0, metdscn:  Properties of the function d(x,A)
which gives the distance from a point to a nonempty set.

lebnum, lebnumii:  The Lebesgue number lemma, a nice result about open
covers in a metric space and a key step in the proof of the covering map
lifting theorem.

df-pi1:  Converted to structure builders, eliminated df-pi1b

pcopt2, pcorev2:  commuted versions of pcopt, pcorev.  Took me a while
to realize that left identity -> right identity and left inverse ->
right inverse does not actually follow from the other proofs, since at
this level we have only a groupoid, not a group.

pi1xfr:  A path induces a group isomorphism on fundamental groups.

df-dv:  Added a second argument to _D for the ambient space.  There is a
drop-in replacement from ( _D ` F ) to ( RR _D F ), and ( CC _D F ) is
the complex derivative of F. The left argument can be any subset of CC,
but since the derivative is not a function if S has isolated points, I
restrict this to S e. { RR , CC } for the primary convenience theorems.

df-lgs:  The Legendre symbol, a tour-de-force of something interesting
coming out of a ridiculous amount of case analysis.  This definition is
actually the Kronecker symbol, which extends the Jacobi symbol which
extends the Legendre symbol to all integers.  The main theorem to be
done in this area is of course the law of quadratic reciprocity, but
currently I'm stuck proving Euler's criterion which is waiting on
polynomials over Z/pZ.  So far the basic theorems prove that ( A /L N )
e. {-1,0,1}, ( A /L N ) =/= 0 iff A,N are coprime, and it is
distributive under multiplication in both arguments.

kur14:  The Kuratowski closure-complement theorem, which I mentioned in
another email.

df-pcon, df-scon:  Path connected spaces and simply connected spaces.  I
hope to use SCon to prove some kind of Cauchy integral theorem, but
we'll see.

txpcon, ptpcon:  products of path-connected spaces are path-connected
(ptpcon is actually an AC-equivalent)

pconpi1:  The fundamental groups of a path-connected space are
isomorphic

sconpi1:  A space is simply connected iff its fundamental group is
trivial

cvxpcon, cvxscon:  a convex subset of CC is simply connected blcvx,
blscon:  a disk in CC is convex and simply connected

df-cvm:  Definition of a covering map.

cvmlift:  The Path Lifting Theorem for covering maps

df-rpm, df-ufd:  define a prime element of a ring and a UFD

psr1val:  Basic theorems for univariate polynomials




(9-Jan-2015) mpbi*an* hypothesis order change
---------------------------------------------

At the suggestion of a couple of people, I changed the hypothesis order
in mpbi*an* (7 theorems) so that the major hypothesis now occurs last
instead of first, in order to make them less annoying to use.  The
theorems changed were:

  mpbiran mpbiran2 mpbir2an mpbi2and mpbir2and mpbir3an mpbir3and

This change affects over 1000 proofs.  The old versions are still there
suffixed with "OLD" and will remain for 1 year.

To update your mathbox etc.,

1. Make sure your mathbox is compatible with the the set.mm just prior
to this change, temporarily available here:

  http://us2.metamath.org:88/metamath/set.mm.2015-01-08.bz2

1. In a text editor, suffix all references to mpbi*an* in your proofs
with OLD (e.g. mpbiran to mpbiranOLD).  This will make your proofs
compatible with the current set.mm.

2. Update the current set.mm with your mathbox.

3. For each proof using mpbi*an*OLD, do the following in the metamath
program:

./metamath set.mm
...
MM> prove abc
MM-PA> minimize mpbi*an*/except *OLD/allow_growth
MM-PA> save new_proof/compressed
MM-PA> exit
..,
MM> write source set.mm
MM> exit


(11-Jun-2014) Mario Carneiro's revisions
----------------------------------------

Mario Carneiro finished a major revision of set.mm. Here are his notes:

    syl3anbrc, mpbir3and: simple logic stuff
    fvunsn: eliminate D e. _V hypothesis
    caoprcld, caoprassg, caoprcomg: deduction form
    winafpi: dedth demonstration
    avglt1, avglt2, avgle1, avgle2: average ordering theorem colleciton
    peano2fzr: recurring lemma
    fzfid: very common use due to deduction-form sum theorems
    seqcl2 thru seq1p: deduction form
    sercaopr2: the 'correct' general form of sercaopr
    seqz: absorbing element in a sequence (note that there are now theorems
    seq1 and seqz, since the old tokens are gone)

    cseq1 thru ser1add, seq1shftid, seqzm1 thru ser0p1i: deleted - all the
    theorems in this section have equivalents in the new seq section,
    although it is often a 4-5 to 1 mapping, since the deduction framework
    makes it easier to have ease of use and full generality at the same time

    df-exp: revised to include negative integer exponents. The theorems for
    nonnegative exponents are exactly the same as before, but for negative
    exponents ( A ^ -u N ) we need the assumption A=/=0 so that we don't
    divide by zero. Relevant new theorems are:
    expneg, expneg2, expnegz, expn1: definition of ( A ^ -u N ) is 1 /  ( A
    ^ N )
    expcl2lem: closure under positive and negative exponents
    rpexpcl: the only existing closure theorem that was changed as a result
    reexpclz, expclz: closure for reals and complexes to integer exponents,
    when the argument is nonzero
    1exp, expne0i, expgt0, expm1, expsub, expordi thru expword2i:
    generalized to negative exponents
    mulexpz, expaddz: the main "hard" theorems in this section, generalizing
    the exponent addition laws to integers. The main reason to keep both the
    new versions and the old (mulexp, expadd) is because the new theorems
    generalize the exponent at the cost of the extra hypothesis A =/= 0.

    discr, discr1: deduction form, generalized to all A e. RR, shortened proof
    rexanuz: the new upper integer form of cvganz, and used as the basis for
    most other manipulations on upper integer sets to replace cau* and cvg*
    theorems
    rexfiuz: finite set generalization of rexanuz
    rexuz3: turn an upper integer quantifier E. j e. ZZ A. k e. (ZZ>= ` j)
    into a restricted upper integer quantifier E. j e. NN A. k e. (ZZ>= ` j)
    rexanuz2: restricted upper integer quantifier form of rexanuz
    r19.29uz: looks sort of like r19.29, but for upper integer quantifiers
    r19.2uz: sort of like r19.2z
    cau3lem: useful on its own for being sufficiently general for both real
    cauchy sequences and cauchy sequences in metric spaces
    cau3: convert a cauchy sequence from two-quantifier form into
    three-quantifier form (this last one is useful because it is compatible
    with rexanuz2)
    cau4: use cau3 to change the base of the cauchy sequence definition
    caubnd2: a cauchy sequence is eventually bounded. This is sufficient for
    most proofs, but I went ahead and proved the original form caubnd as well
    caurei, cauimi, ser1absdifi: deleted since they weren't being used and
    there wasn't a good reason to reformulate them in view of the much more
    general theorems to come
    bcval5: write out the numerator of a binomial coefficient as a sequence
    with arbitrary start and end
    bcn2: N choose 2
    fz1iso: any finite ordered set (in particular, a finite set of integers)
    is isomorphic to a one-based finite sequence
    seqcoll: lemma for soundness of df-sum, stated for general sequences

    clim thru climcn2: deduction form
    addcn2, subcn2, mulcn2: the old approach was a bit backward, using a
    direct proof of continuity to prove that addition etc. is sequentially
    continuous, then using sequential continuity to prove continuity through
    bopcn, whose proof requires CC. To avoid this, now we prove continuity
    directly (writing out the definition since the other continuity theorems
    aren't ready yet) and use this to prove that limits are preserved in
    climadd etc.
    climadd thru serf0: deduction form

    df-sum: a new much more general definition of summation, allowing the
    index set to be either a finite set or a lower bounded subset of the
    integers
    sum2id: assume the argument to a sum is a set
    sumfc: change bound variable
    summo: a 'big' theorem, proving that the new definition is well-defined
    zsum, isum, fsum: the spiritual equivalents to the old dffsum, dfisum,
    showing the definition on upper integer subsets, upper integer sets, and
    nonempty finite sets, respectively.
    sum0: sum of the empty set is zero
    sumz: sum of zero on any summable index set is zero
    fsumf1o: finite sum is unchanged under a bijection
    sumss, fsumss, sumss2: add zeroes to a finite sum to enlarge the index set
    fsumcvg, fsumcvg2, fsumcvg3: a finite sum is convergent (useful for
    treating a finite sum using infinite sum theorems)
    fsumsers: sum over a subset of a finite sequence
    fsumser: sum over a finite sequence (this is the most direct equivalent
    to the old dffsum)
    fsumcl2lem: a set that is closed under addition is closed under nonempty
    finite sums
    fsumcllem: a set containing zero and closed under addition is closed
    under finite sums
    fsumcl, fsumrecl, fsumrpcl: closure under reals, complexes, positive reals
    sumsn, sumsns: sum over a singleton
    fsumm1: break off the last term (this is more general than fsump1,
    because this includes the case where the smaller sum is empty)
    isumclim, isumclim2: relation between infinite series and convergent
    sequences
    sumsplit: generalized for subsets of the integers (but not that useful
    in practice)
    fsum2d, fsumcom2: generalization of fsumcom to sum over non-rectangular
    regions
    fsumxp: sum over a cross product (this theorem has no equivalent under
    the old system)
    abscvgcvg: absolutely convergent implies convergent

    binom: shortened the proof
    divcnv: generalized reccnv for convenience
    arisum: now adds up 1...N instead of 0...N
    expcnv: shortened the proof
    geoser: now sums 0...N-1 since it makes the formula nicer
    cvgrat: shortened the proof
    fsum0diag: shortened the proof (corollary of fsumcom2)
    mertens: generalization of the old proof of efadd

    elcncf2: commuted arguments to elcncf
    cncfco: composition of continuous functions

    ivth, ivth2, ivthle2: shortened the proof (the 'le' version allows the
    value to be equal to one of the endpoints)
    df-tan, df-pi: revised to use shorter dummy-free expressions
    efcllem, efge2le3, efadd, eftlub, eirr, efcn, reeff1o: shortened the proof
    demoivre: swapped with *ALT version and extend to negative exponents
    acdc3lem thru acdcALT: deleted (use axdc* theorems)

    ruc: shortened the proof (yes, this is the second time I've shortened
    ruc. This time I followed the same approach as the original ruc, rather
    than the alternative proof via rpnnen.)

    dvdseq: simplifies some divisibility proofs
    bezout: imported from my mathbox
    dvdsgcd, mulgcd: new proof from bezout
    algrf thru eucalg: deduction form
    df-pc: imported prime count function from my mathbox
    1arith: new proof using the prime count function. This proof has a
    considerably different statement from the original proof, so it is
    perhaps debatable whether this is still the "fundamental theorem of
    arithmetic", but this is the easiest to prove given the tools already
    available, and it is also easier to use in future proofs (although the
    more direct statements pc11 and pcprod are probably more directly
    applicable).

    dscmet, dscopn: the discrete metric generates the discrete topology

    cnmptid thru cnmptcom: This new set of theorems, with names cnmpt*, is
    designed for use in quickly building up continuous functions expressed
    in the mapping notation. The naming convention takes the first number to
    be the number of arguments to the mapping function, and the second
    number is the number of arguments to the function that is applied at the
    top level. If the function that is applied is an atomic operation, a *f
    is suffixed. The "base case" continuous functions are given by cnmptc,
    cnmptid (for constants and the identity), and cnmpt1st, cnmpt2nd for
    two-argument mapping functions. sincn or ipcn provide a good demonstration.

    df-lm: the definition has been changed to represent convergence with
    respect to a topology, rather than a metric space. The old definition
    can be recovered as ( ~~>m ` D ) = ( ~~t ` ( MetOpen ` D ) ), although
    the functions are now required to be partial functions on CC rather than
    just subsets of ( X X. CC ).
    df-cau: the definition has been abbreviated, and the functions are again
    required to be partial functions on CC.
    df-cmet: the definition has been abbreviated
    lmbr, lmbr2, lmbrf: these have the same names as the old versions, but
    now apply to topological convergence; the metric convergence
    equivalents are now called lmmbr, lmmbr2, lmmbrf.
    lmbr thru iscauf: deduction form
    lmmo: was lmuni, now applies to hausdorff spaces instead of metric spaces
    lmcls, lmcld: was previously part of metelcls, but this direction
    doesn't need choice or metric spaces, so it is now separate
    lmcnp, lmcn, lmcn2: continuous functions preserve limits
    metdcn: a metric is a continuous function in its topology
    addcn, subcn, mulcn: modified to use topological product
    fsumcn, fsum2cn: adjusted for compatibility with cnmpt*
    expcn, divccn, sqcn: powers and division by a constant are continuous
    functions (there is still no proof that division is continuous in the
    second argument, but we haven't needed it yet).
    isgrp2d: deduction form of isgrp2i
    ghgrp, ghablo: deduction form of ghgrpi
    ghsubg, ghsubablo: deduction form of ghsubgi

    vacn, smcn, ipcn: generalized one-argument continuity proofs to joint
    continuity (and shortened the proof)
    minvec: shortened the proof

    sincn, coscn: shortened the proof
    pilem*: shortened the proof
    efhalfpi, efipi, sinq12ge0, cosq14ge0: more trig theorems
    sineq0: combined the old sineq0 with sineq0re and sinkpi
    coseq1, efeq1: similar to sineq0
    cosord: generalize cosh111 to [0, pi]
    recosf1o, resinf1o: useful for defining arcsin, arccos
    efif1o, eff1o: shortened the proof
    df-log: abbreviated definition, changed principal domain to (-pi, pi]
    instead of [-pi, pi) - the result is that now log(-1) = i pi rather than
    -i pi, in keeping with standard principal value definition
    relogrn, logrncn: closure for ran log
    logneg, logm1: log of a negative number
    explog, reexplog, relogexp: generalized to negative integer powers

    df-hlim, df-hcau: abbreviated definition
    hvsubass, hvsub32, his35: vector identities
    hlimadd: limit of a sequence of vector sums
    occon3: generalize chsscon2i and prove without CC
    shuni: generalize chocunii to any subspaces with trivial intersection
    pjthmo: the uniqueness part of pjth, which needs no choice
    occl: shortened the proof
    pjth: shortened the proof (from 45 lemmas to 2!) using minvec
    pjpreeq, pjpjpre: a weak version of pjeq that allows some usage of the
    projection operator without assuming the projection theorem
    chscl: the majority of the proof of osum is now here; this proof does
    not need CC even though osum does
    hhcno, hhcnf: relate ConOp and ConFn to the topology Cn predicate
    imaelshi, rnelshi: the image of a subspace under a linear function is a
    subspace
    hmopidmchi, hmopidmpji: the class abstraction { x e. ~H | ( T ` x ) = x
    } is the same as the range of T

    - My mathbox -
    bclbnd: deleted because it is obsolete
    efexple: generalized to N e. ZZ

    - Scott Fenton's mathbox -
    sinccvg: (named for the sinc function: sinc x = sin x / x) shortened the
    proof
    trirecip: shortened the proof
    df-bpoly: abbreviated definition and eliminated the if statement, since
    now you can sum the empty set
    bpolydif: shortened the proof

    - FL's mathbox -
    cmpbvb, fopab2ga, fopab2a, cmpran, riecb, riemtn, fopabco3, dffn5b:
    deleted as duplicates
    df-pro, df-prj: mapping definition
    ispr1, prmapcp2, valpr, isprj1, isprj2: changed P e. Q to P e. V
    prmapcp3: corollary of prmapcp2
    hbcp: imported as hbixp1
    cbicp: shortened the proof
    iserzmulc1b thru seq0p1g: deleted since seqz is gone (there are already
    equivalents for seq)
    df-prod (prod3_ token): inlined into the new df-prod (previously
    df-prod2), since it's not that helpful to have two definitions that are
    so similar
    (I don't really see the point of this definition to begin with - it's
    very similar in scope to what seq does. It's not possible to give it a
    finite sets definition like the new sum_ , since it is defined on
    arbitrary magmas, so it reduces to basically the same as seq, except
    that it is defined on the empty set as well. I thought about moving this
    definition to seq as well, say by having ( seq M ( P , F ) ` ( M - 1 ) )
    = ( Id ` P ), but this makes seqfn a bit asymmetrical and it can't
    really be used to generalize things like seqsplit because you'd want the
    original version anyway, and the sethood requirement is usually a
    distraction.)
    prod0: renamed from valproemset (another indication that the extension
    to the empty set is not that useful is that this theorem is never used)
    prodeq2, prodeq3: exchanged these labels for consistency with ordering
    in the token and with prodeq2d, prodeq3d
    prodeqfv thru fprod2: miscellaneous updates for the modified definition
    clfsebs, fincmpzer, fprodadd, isppm, seqzp2, fprodneg, fprodsub:
    shortened the proof
    df-expsg, df-mmat: these definitions are never used, but I updated them
    to use seq and prod_ in place of seq1 and prod3_
    cmphmp, idhme, cnvhmph, hmphsyma, hmphre, hmeogrp, homcard, eqindhome:
    shortened the proof
    eltpt: imported
    ttcn, ttcn2: use cnmpt* instead
    exopcopn, topgrpsubcn, trhom, ltrhom, cntrset: shortened the proof
    uncon: this is a special case of iuncon - any collection of connected
    subsets that share a point is connected

    - Jeff Madsen's mathbox -
    acdcg, acdc3g, acdc5g: I deleted these as part of the acdc* cleanup, but
    the existing axdc* theorems assume the base set is a set in the
    hypotheses - if this becomes an issue later, these theorems should be
    updated, rather than making *g theorems. As it is, there is no need for
    them yet.
    sdc: shortened the proof
    seq1eq2: seq1 is going away, use seqfveq
    fsumlt, fsumltisum, fsumleisum: generalized to finite sets and imported
    csbrn, trirn, mettrifi: shortened the proof
    mettrifi2: this is basically the same as the new mettrifi
    geomcau, caushft, caures: deduction form, shortened the proof
    metdcn: imported and generalized to joint continuity

    cnmptre, iirevcn, iihalf1cn, iihalf2cn, iimulcn: continuity base cases
    for path homotopy proofs
    elii1, elii2: commonly used lemma in path homotopy proofs
    cncfco: imported
    cnimass: same as elsubsp2
    cnres: same as elsubsp
    cnmpt2pc: generalized piececn to two-arg functions and adjusted for
    compatibility with cnmpt*

    ishomeo2: this is the same as the new ishomeo
    hmeocn: corollary of hmeobc
    hmeocnv: the same as cnvhmpha
    ctlm thru lmtlm: deleted/imported this whole section because this is
    essentially the same as the new topological limit relation, and all
    theorems here are already represented in main set.mm now
    txcnoprab: the same as cnmpt2t
    txsubsp: imported
    cnresoprab thru cnoprab2c: this set of theorems is very similar to the
    new cnmpt* section (and indeed inspired that section), so they are
    deleted as duplicates
    txmet, txcc: imported
    addcntx thru mulcntx: the same as the new addcn thru mulcn

    bfp: shortened the proof
    df-rrn, rrnval: use mapping notation
    rrnmet, rrncms: shortened the proof

    df-phtpy, df-phtpc: use mapping notation
    phtpyfval thru phtpyco: deduction form, shortened the proof
    isphtpc2: merge with isphtpc
    reparpht: shortened the proof
    df-pco: use mapping notation, also extend the domain to functions that
    may not line up at the endpoints, for simplicity of definition (it
    doesn't affect pi1gp, because that is restricted further to functions
    with the same start and end point anyway)
    pcofval thru pco1: deduction form
    pcocn, pcohtpy, pcopt, pcoass, pcorev: shortened the proof
    pi1fval thru pi1val: deduction form
    pi1gp: shortened the proof



(4-Jan-2014) Wide text editor windows
-------------------------------------

Some people like to work with text editor windows wider than the 79
characters per line convention for set.mm.  The following commands:

  MM> set width 9999
  MM> save proof */compressed
  MM> set width 120
  MM> write source set.mm/rewrap

will put each proof on a single line and will wrap all $a and $p
descriptions at column 120.  If your text editor truncates the display
of lines beyond the end of screen, having each proof on a single line
will reduce scrolling.  (Note that 'save proof */compressed' takes 5-10
minutes depending on CPU speed.)

Repeat the above with 'set width 79' (the default width) to restore the
conventional widths.

Note that comments outside of $a and $p descriptions, such as section
headers, are not affected by 'write source.../rewrap'.  The math
formulas in $a and $p statements are also unchanged, since breaking and
indentation may have been chosen to emphasize the structure of the
formula (exception:  formulas exceeding the 'set width' are wrapped, but
not unwrapped with a larger 'set width').



(2-Dec-2013) Class substitution
-------------------------------

Has the time come to remove the "A e. _V" restriction on uses of df-sbc?

See the comments under df-sbc for the issues.  So far we have been
uncommitted about the behavior at proper classes so as not to conflict
with Quine.  To do this, we have prohibited the direct use of df-sbc and
instead only permitted use of dfsbcq.

However, this has become inconvenient, requiring annoying sethood
justifications for theorems using df-sbc and df-csb that make proofs
longer.

There is no theoretical reason not to define it any way we want for
proper classes, but there are two possible ways that we must choose
from.  We could allow the direct use of df-sbc (always false for proper
classes), which means for example that sbth would still require the
"A e. V" antecedent.  Or we could redefine it as in sbc6g (always true
for proper classes), which means some other kinds of theorems would
require the antecedent.  I'm not sure which is more advantageous, but
I'm inclined to choose df-sbc which seems more natural.  Any opinions
are welcome.



(1-Dec-2013) Definite description binder
----------------------------------------

I changed the symbol for the definite description binder in df-iota and
df-riota to an inverted iota, which was used by Russell and seems more
standard than regular iota.  It is analogous to the inverted A and E for
"for all" and "exists".  I left the tokens "iota" and "iota_" alone for
now, although they should be changed since it's no longer a true iota.
Ideally it should be changed to "i." in analogy with "A." and "E.", but
"i." is already used for the imaginary unit.  "ii" (inverted iota) is
one possibility.  Or maybe "io." since we use the first two letters,
like "ph", for Greek.  Suggestions are welcome.



(8-Oct-2013) Proof repair techniques
------------------------------------

If you discover you have been going down a wrong path while creating a
proof, in some cases there are techniques to help salvage parts that are
already proved so they don't have to be typed in again.  Let me know if
there is a useful technique you use, and I'll post it here.

(1) Jeff Hankins described the following technique he used:

  Here's a cool trick I found useful today.  The theorem dummylink can
  "save" people from incorrect theorem assignments.  Here's what I mean.
  I had used the theorem rcla42ev, but Metamath said that there was a
  disjoint variable violation, so I would have to delete that step.
  However, I had a decent-size proof under the incorrect step and I did
  not want to lose all that hard work.  Here's what I did:  I took the
  last step under rcla42ev and assigned it to a dummylink.  I improved the
  statement under the dummylink to save that part of the proof, then I
  deleted the rcla42ev and cleaned up the disjoint variable stuff.  Once I
  got to the part which I had the proof for already, I was able to improve
  it because the proof was in the dummylink.  After that, I deleted the
  use of dummylink and continued on as normal.  I still had to do all the
  annoying technical rcla4e substitution and bound variable steps, but at
  least I didn't have to do all the exponential and "if N is a natural
  number, N-1 is a nonnegative integer" stuff all over again.

Indeed, dummylink is probably underused as a tool to assist proof
building.  Its description in set.mm contains one suggested method for
how it can be used.  When the proof is complete, 'minimize_with' will
automatically reformat the proof to strip out the dummylink uses.


(2) I often use a quick-and-dirty script called extractproof.cmd that
creates a script to reproduce a proof.  extractproof.cmd contains
the following:

  set width 9999
  open log 1.tmp
  show new_proof
  close log
  set width 79
  tools
  match 1.tmp $ y
  clean 1.tmp ber
  substitute 1.tmp ' ' '%^' 1 ''
  add 1.tmp ! ''
  delete 1.tmp ^ =
  delete 1.tmp " " ""
  !substitute 1.tmp obs "" a ""
  substitute 1.tmp '%' '%assign last ' 1 ''
  add 1.tmp "" " /no_unify"
  reverse 1.tmp
  substitute 1.tmp % \n a ''
  quit

Here is an example of its use.  First, we extract the proof:

  MM> prove a1i
  Entering the Proof Assistant.  HELP PROOF_ASSISTANT for help, EXIT to exit.
  You will be working on statement (from "SHOW STATEMENT a1i"):
  53 a1i.1 $e |- ph $.
  54 a1i $p |- ( ps -> ph ) $= ... $.
  Note:  The proof you are starting with is already complete.
  MM-PA> submit extractproof.cmd/silent

The generated script 1.tmp will contain these lines:

  !9
  assign last ax-mp /no_unify
  !8
  assign last ax-1 /no_unify
  !5
  assign last a1i.1 /no_unify

The step number comment corresponds to the 'show new_proof'
step.  The /no_unify prevents interactive unification while
the script is running.  Now let's test the generated script:

  MM-PA> delete all
  The entire proof was deleted.
  1 a1i=? $? |- ( ps -> ph )
  MM-PA> submit 1.tmp/silent
  MM-PA> unify all/interactive
  No new unifications were made.
  MM-PA> improve all
  A proof of length 1 was found for step 5.
  A proof of length 1 was found for step 4.
  A proof of length 3 was found for step 2.
  A proof of length 1 was found for step 1.
  Steps 1 and above have been renumbered.
  CONGRATULATIONS!  The proof is complete.

For proof repair, I often extract pieces of the generated script to
automatically reprove sections that are correct.  With experience, I've
learned a few tricks to deal with several problems.  For example, if the
proof had an unknown step when the script was generated, the script will
contain 'assign last ?', which must be manually deleted from the generated
script, and all subsequent 'assign last' must be changed to 'assign -1'.



(5-Oct-2013) Improvements in extensible structure utility theorems
------------------------------------------------------------------

I revamped the utility theorems used for extensible structures.  The
large collection of general structure theorems in the "Extensible
structures" section of set.mm have been reduced to just four:  strfvn
(normally not used), ndxarg, ndxid, and strfv.  I added comments to
these to assist understanding their purpose.

Before, it was awkward to work with structures with more than 3
components, since the number of utility theorems was O(N^2) where N is
the number of structure components.  In particular, the old O(N^2)
strNvM theorems have become the single theorem strfv.  To achieve this,
strfv has a hypothesis requiring that the _specific_ structure S be a
function.

Note that, as before, any particular member of say Grp (df-grp) needn't
be a function as long as it has the required values under our df-fv
definition.  (This "opening up" of Grp to possible non-functions
dramatically simplifies many extensible structure theorems and causes no
theoretical problem.)  So, strfv can't be used with an arbitrary member
of Grp.  But practically speaking, we will not use _specific_ structures
that aren't functions, and this limitation (which conforms to the
literature in any case) does not cause a problem.

Extending a structure with a new component (such as groups to rings) is
now O(N), requiring one new theorem per structure component (rngbase,
rngplusg, rngmulr).  In particular, the new theorem fnunsn can be used
to add a component to a previous structure.  Notice its use in building
a ring structure from a group structure in rngfn.
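
As a loose illustrative model (ours, not set.mm's), an extensible
structure behaves like a finite function from index numbers to
components, and a fnunsn-style extension simply adjoins one more
index/component pair; the index values and component names below are
placeholders:

```python
# Model an extensible structure as a function (a dict) from index
# numbers to components.  The concrete indices here are placeholders.
group_struct = {1: "base set", 2: "group operation"}

# fnunsn-style extension: adjoin one new index/component pair to build
# a ring structure from a group structure, keeping the old components.
ring_struct = {**group_struct, 3: "ring multiplication"}

print(len(ring_struct))  # 3: the extension adds exactly one component
```

This mirrors why extending a structure is O(N): only the new component
needs attention, and the old components carry over unchanged.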

To prove that all structure indices are different, I now successively
show each member is not in the previous set of members rather than
having O(N^2) inequalities.  The largest application of this so far is
in phllem2 in my mathbox, used to prove phlfn with 8 components starting
from lvecfn with 6 components, which in turn comes from rngfn with 3
components.  This shows that an 8-component structure is now practical
to work with.



(8-Sep-2013) New df-seq (sequence generator) and df-sqr (square root)
---------------------------------------------------------------------

Mario Carneiro has updated the sequence generator with the definition,

    df-seq $a |- seq M ( P , F ) = ( rec (
        ( x e. _V , y e. _V |-> <. ( x + 1 ) , ( y P ( F ` ( x + 1 ) ) ) >. ) ,
          <. M , ( F ` M ) >. ) " om ) $.

This rec() is a function on all ordinals 0o, 1o, 2o,... consisting of
the ordered pairs

   { <. 0o , <. M , ( F ` M ) >. >. ,
     <. 1o , <. ( M + 1 ) , ( F ` ( M + 1 ) ) >. >. ,
     <. 2o , <. ( M + 2 ) , ( F ` ( M + 2 ) ) >. >. , ... }

When restricted to the finite ordinals (omega), its range is exactly the
sequence we want:

    { <. M , ( F ` M ) >. >. ,
      <. ( M + 1 ) , ( F ` ( M + 1 ) ) >. >. ,
      <. ( M + 2 ) , ( F ` ( M + 2 ) ) >. >. , ... }

So, we just extract the range and throw away the domain.
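
The recursion this captures can be sketched in Python (an illustration
under our own naming, not part of set.mm): the value at index n is the
left fold a(M) = F(M), a(k+1) = a(k) P F(k+1).

```python
def seq_value(M, P, F, n):
    """Value of ( seq M ( P , F ) ` n ) for n >= M:
    a(M) = F(M), and a(k+1) = a(k) P F(k+1)."""
    acc = F(M)
    for k in range(M + 1, n + 1):
        acc = P(acc, F(k))
    return acc

# seq 1 ( + , I ) gives partial sums: 1 + 2 + 3 + 4
print(seq_value(1, lambda x, y: x + y, lambda k: k, 4))  # 10
```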

This is much simpler than the equivalent triple of previous definitions
for the same thing:

    df-seq1 $a |- seq1 = { <. <. f , g >. , h >. |
                   h = { <. x , y >. | ( x e. NN /\
                     y = ( 2nd ` ( ( rec ( { <. z , w >. |
                         w = <. ( ( 1st ` z ) + 1 ) ,
                           ( ( 2nd ` z ) f ( g ` ( ( 1st ` z ) + 1 ) ) ) >. } ,
                       <. 1 , ( g ` 1 ) >. ) o. `'
       ( rec ( { <. z , w >. | w = ( z + 1 ) } , 1 ) |` om ) ) ` x ) ) ) } } $.
    df-shft $a |- shift = { <. <. f , x >. , g >. | g =
                  { <. y , z >. | ( y e. CC /\ z = ( f ` ( y - x ) ) ) } } $.
    df-seqz $a |- seq = { <. <. x , g >. , h >. |
        h = ( ( ( ( 2nd ` x ) seq1 ( g shift ( 1 - ( 1st ` x ) ) ) )
           shift ( ( 1st ` x ) - 1 ) ) |` { k e. ZZ | ( 1st ` x ) <_ k } ) } $.

The new seq has arguments, unlike the old one, which was a constant class
symbol.  This allows proper classes for its arguments (in particular P
and F).  The "M" argument is visually separated from the other two since
it acts more like a parameter; it could correspond to a subscript in a
textbook e.g.  "seq_M(P,F)" in LaTeX.

Usually I prefer to have new definitions in the form of new class
constant symbols (thus requiring no new equality, hb*, etc. theorems),
but we made this an exception since Mario wants to use proper classes
for P and F, and also the old seq was rather awkward looking:

  old:  ( <. M , P >. seq F )
  new:  seq M ( P , F )

The relationship to the old seq0, seq1, and seq (now seqz) are given
by the following theorems:

  seq0fval $p |- ( S seq0 F ) = seq 0 ( S , F )
  seq1fval $p |- ( S seq1 F ) = seq 1 ( S , F )
  seqzfval $p |- ( M e. V -> ( <. M , S >. seqz F ) = seq M ( S , F ) )

Eventually, these 3 can be phased out.

Mario also extended the domain of the square root function to include
all of CC.  The result is still uniquely defined, and corresponds
to the principal value described at
http://en.wikipedia.org/wiki/Square_root#Principal_square_root_of_a_complex_number

    df-sqr $a |- sqr = ( x e. CC |-> ( iota_ y e. CC ( ( y ^ 2 ) = x /\
      0 <_ ( Re ` y ) /\ ( _i x. y ) e/ RR+ ) ) ) $.
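
As a numerical sanity check (our own helper names; an illustration, not
set.mm), the three conditions in df-sqr single out the principal root
that Python's cmath.sqrt also returns:

```python
import cmath

def satisfies_df_sqr(y, x, tol=1e-9):
    """Check the df-sqr conditions: y^2 = x, 0 <= Re(y), and
    i*y is not a positive real."""
    iy = 1j * y
    iy_pos_real = abs(iy.imag) < tol and iy.real > tol
    return abs(y * y - x) < tol and y.real >= -tol and not iy_pos_real

x = -4 + 0j
y = cmath.sqrt(x)                # principal root 2j
print(satisfies_df_sqr(y, x))    # True
print(satisfies_df_sqr(-y, x))   # False: i*(-2j) = 2 is a positive real
```

The third condition is what breaks the tie on the negative real axis,
where both candidate roots have real part 0.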

Many existing theorems were affected by these changes; see the list
at the top of set.mm for 8-Sep-2013.



(3-Jun-2013) Adding or deleting antecedents in a theorem
--------------------------------------------------------

Sometimes I might not initially know that a certain antecedent is
required for a proof, but discover it only when I'm deep into the proof.
Other times, I may have redundant antecedents that are not needed by the
proof.  And, sometimes I just want to rearrange the antecedents for
better appearance or easier use later on.  In all these cases, it is
annoying and time-consuming to have to re-enter the proof from scratch
to account for a modified conjunction of antecedents.

To make the task of editing antecedents easier, I use a submit script
called "unlink-ant.cmd" which is listed below at the end of this note.

This method has a limitation:  it assumes all antecedent linkages are
done via simp*, which will occur when we chain to referenced theorems
via syl*anc as described in the mmnotes.txt entry below of 19-Mar-2012.
In other situations, additional manual step deletion may be required, or
it might be possible to enhance the script.

As an example, we will use divdivdiv, which was described in the
19-Mar-2012 entry below.  You can follow these steps to test the script.

  MM> read set.mm
  MM> prove divdivdiv
  ...

If a proof is incomplete, make sure it is in the state left by 'unify
all/interactive' then 'improve all'.

From inside the Proof Assistant,
run the script to delete all steps linking antecedents:

  MM-PA> submit unlink-ant.cmd/silent

At this point, the proof will have the antecedent linkages stripped.
Replace the original proof in set.mm with this stripped-down proof,
either in an editor or with 'save new_proof' then 'write source'.

  MM-PA> show new_proof/normal
  Proof of "divdivdiv":
  ---------Clip out the proof below this line to put it in the source file:
      ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? divcl syl111anc
      ? ? ? ? ? ? ? ? ? ? divcl syl111anc ? ? mulcom syl11anc ? ? ? ? ? ? ? ? ? ?
      ? ? ? ? divmuldiv syl22anc eqtrd opreq2d ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?
      ? ? ? ? ? ? ? ? ? ? ? ? divmuldiv syl22anc ? ? ? ? ? ? ? ? ? ? ? ? ? mulcom
      syl11anc opreq1d ? ? ? ? ? ? ? ? ? ? ? ? mulcl syl11anc ? ? ? ? ? ? ? ? ? ?
      ? ? mulne0 syl22anc ? divid syl11anc 3eqtrd opreq1d ? ? ? ? ? ? ? ? ? ? ? ?
      ? ? ? divcl syl111anc ? ? ? ? ? ? ? ? ? ? divcl syl111anc ? ? ? ? ? ? ? ? ?
      ? divcl syl111anc ? ? ? mulass syl111anc ? ? ? ? ? ? ? ? ? ? ? ? ? divcl
      syl111anc ? mulid2 syl 3eqtr3d eqtr3d ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? divcl
      syl111anc ? ? ? ? ? ? ? ? ? ? ? ? ? mulcl syl11anc ? ? ? ? ? ? ? ? mulcl
      syl11anc ? ? ? ? ? ? ? mulne0 ad2ant2lr ? ? divcl syl111anc ? ? ? ? ? ? ? ?
      ? ? divcl syl111anc ? ? ? ? ? divne0 adantl ? ? ? divmul syl112anc mpbird
      $.
  ---------The proof of "divdivdiv" to clip out ends above this line.
  MM-PA>

You can now edit set.mm to rearrange, add, or delete antecedents.  When
done, go back into MM-PA:

  MM-PA> unify all/interactive
  MM-PA> improve all
  ...
  CONGRATULATIONS!  The proof is complete.

Note that the file 3.tmp produced by the script will contain a list of
all antecedents that are needed.  This is useful for determining whether
the theorem has unused ones.  (The list may be slightly redundant, as
the ones with /\ below show.)

  MM-PA> more 3.tmp
  ( B e. CC /\ B =/= 0 )
  ( C e. CC /\ C =/= 0 )
  ( D e. CC /\ D =/= 0 )
  A e. CC
  B =/= 0
  B e. CC
  C =/= 0
  C e. CC
  D =/= 0
  D e. CC


The file unlink-ant.cmd is listed below.

! unlink-ant.cmd - delete all steps linking antecedents to proof
! Run this at the MM-PA prompt with 'submit unlink-ant.cmd/silent' on a
! complete or incomplete proof.  Make sure the starting proof is in
! the state left by 'unify all/interactive' then 'improve all'.  Output:
!   1. The list of required antecedents is contained in 3.tmp.
!   2. The proof shown by 'show new_proof/normal' will have antecedent
!      linkages removed, so that the antecedent conjunction can be
!      edited (antecedents added, deleted, or reorganized).  Later,
!      'unify all/interactive' then 'improve all' will reconnect them.
!   3. The original set.mm is saved in 2.tmp in case something goes wrong.
! Temporary files used: 1.tmp, 2.tmp, 3.tmp, 4.tmp.
! We assume all antecedent linkages are done via simp*, which are normally
! chained to a referenced theorem via syl*anc (see mmnotes.txt entry
! of 19-Mar-2012).  In other situations, additional manual step deletion
! may be required.
save new_proof/compressed
write source 2.tmp
set width 9999
open log 1.tmp
show new_proof
close log
set width 79
tools
copy 1.tmp 3.tmp
! prevent simpld, simprd deletion
substitute 1.tmp '=simpld' '=ximpld' a ''
substitute 1.tmp '=simprd' '=ximprd' a ''
! extract the steps to be deleted into 1.tmp
match 1.tmp '=simp' ''
clean 1.tmp ber
delete 1.tmp ' '  ''
add 1.tmp 'delete step ' ''
reverse 1.tmp
! get list of all required antecedents into 3.tmp
copy 3.tmp 4.tmp
match 3.tmp '=simp' ''
match 4.tmp '=? ' ''
copy 3.tmp,4.tmp 3.tmp
delete 3.tmp '' ' ->'
unduplicate 3.tmp
clean 3.tmp ber
add 3.tmp '' '$'
substitute 3.tmp ' )$' '' 1 ''
quit
submit 1.tmp
delete floating_hypotheses
!(end of unlink-ant.cmd)

-------------------------------------------------------------------------------


(22-May-2013) New metamath program features
-------------------------------------------

1. A /FORBID qualifier was added to MINIMIZE_WITH.  Stronger than
/EXCEPT, it will also exclude any statements that _depend_ on the
statements in the list (based on the algorithm for SHOW TRACE_BACK).
For example,

  MM-PA> MINIMIZE_WITH * /FORBID ax-inf*,ax-ac

will prevent statements depending on ax-inf, ax-inf2, and ax-ac from
being used.

2. A /MATCH qualifier was added to SHOW TRACE_BACK.  For example,

  MM> SHOW TRACE_BACK cp /AXIOMS/MATCH ax-ac,ax-inf*

will list only "ax-inf2" instead of a long list of axioms and
definitions to sort through.

These changes are in Version 0.07.91 20-May-2013.



(18-May-2013) Separate axioms for complex numbers
-------------------------------------------------

The real and complex numbers are now derived from a set of axioms
separate from their construction.  This isolates them better from the
construction.  It also provides more meaningful information in the
'SHOW TRACE_BACK /ESSENTIAL /AXIOMS' command.

The construction theorem that derives the axiom is called the same name
except that the prefix "ax-" is changed to "ax".  For example, the axiom
for closure of addition is ax-addcl, and the theorem that derives it is
axaddcl.



(27-Feb-2013) *OLD cleanup by Scott Fenton
------------------------------------------

Scott Fenton did a large cleanup of set.mm by eliminating many
*OLD references.  The following proofs were changed:

  oancom ficardom alephon omsublim domtriomlem axdc3lem2 axcclem cfom
  lemul1i lemul2i lemul1a ltmul12a mulgt1 ltmulgt11 gt0div ge0div ltdiv2
  lediv2 lediv12a ledivp1 ledivp1i flhalf nnwo infmssuzcl expord2 expmwordi
  exple1 sqlecan sqeqori crreczi facwordi faclbnd faclbnd6 facavg
  fsumabs2mul 0.999... cvgratlem1ALT cvgratlem1 erelem3 efaddlem11
  efaddlem15 efaddlem16 efaddlem20 efaddlem22 eftlex ef1tllem ef01tllem1
  eflti efcnlem2 sin01bndlem2 cos01bndlem2 cos2bnd sin02gt0 sin4lt0
  alephsuc3 bcthlem1 minveclem27 cospi cos2pi sinq12gt0t hvsubcan hvsubcan2
  bcs2 norm1exi chocunii projlem18 pjthlem10 pjthlem12 omlsilem pjoc1i
  shscli shsvs shsvsi shsleji shsidmi spanuni h1de2bi h1de2ctlem spansni
  spansnmul spansnss spanunsni hoscl hodcl osumlem2 sumspansn spansncvi
  pjaddii pjmulii pjss2i pjssmii pjocini hoaddcomi hodsi hoaddassi
  hocadddiri hocsubdiri nmopub2tALT nmfnleub2 hoddii lnophsi hmops
  nmcopexlem3 nmcopexlem5 nmcfnexlem3 nmcfnexlem5 cnlnadjlem2 cnlnadjlem7
  nmopadjlem adjadd nmoptrii nmopcoadji leopadd leopmuli leopnmid
  hmopidmchlem pjsdii pjddii pjscji pjtoi strlem1 sumdmdii cdjreui cdj1i
  cdj3lem1 nndivsub epos intnat atcvrne atcvrj2b cvrat4 2llnm3 2llnm4
  cdlema2 pmapjat1 2polcon4b paddun lhpocnle lhpmat idltrn ltrnmw trl0
  ltrnnidn cdleme2



(25-Feb-2013) Sethood antecedents
---------------------------------

I changed many antecedents to use e.g.  "A e. V" instead of "A e. B" to
indicate that A must be a set.  For example, uniexg was changed from
"( A e. B -> U. A e. _V )" to "( A e. V -> U. A e. _V )".  I think the
variable V is more suggestive of the _V that it will often be replaced
with, better indicating the purpose of the antecedent.

I changed only theorems whose $f order would not change, so there is no
impact on any proofs.  Eventually, we could change the others such as
funopfvg, but it would mean changing all the proofs referencing them, so
it probably won't be done soon.  But I think "A e. V" is a good
convention to follow for future theorems.



(21-Feb-2013) Changes to syl*
-----------------------------

I changed the order of the hypotheses of 76 syl* theorems for a better
logical "flow", per a suggestion from Mario Carneiro.  This was a big
change, affecting about 6400 proofs (about 1/3 of set.mm).  If you
have made changes to your mathbox that aren't in set.mm, I can update
your mathbox for you if you send it along with the set.mm it works with.

You can also update it yourself as follows.

Step 1.  Copy your mathbox into a file called mathbox.mm.

Step 2.  Copy and paste the following lines into the MM> prompt
(or put them in a SUBMIT script if you wish):

    tools
    copy mathbox.mm tmp.mm
    add tmp.mm '' ' '
    substitute tmp.mm ' sylanOLD ' ' sylanOLDOLD ' all ''
    substitute tmp.mm ' sylan ' ' sylanOLD ' all ''
    substitute tmp.mm ' sylanb ' ' sylanbOLD ' all ''
    substitute tmp.mm ' sylanbr ' ' sylanbrOLD ' all ''
    substitute tmp.mm ' sylan2OLD ' ' sylan2OLDOLD ' all ''
    substitute tmp.mm ' sylan2 ' ' sylan2OLD ' all ''
    substitute tmp.mm ' sylan2b ' ' sylan2bOLD ' all ''
    substitute tmp.mm ' sylan2br ' ' sylan2brOLD ' all ''
    substitute tmp.mm ' syl2an ' ' syl2anOLD ' all ''
    substitute tmp.mm ' syl2anb ' ' syl2anbOLD ' all ''
    substitute tmp.mm ' syl2anbr ' ' syl2anbrOLD ' all ''
    substitute tmp.mm ' syland ' ' sylandOLD ' all ''
    substitute tmp.mm ' sylan2d ' ' sylan2dOLD ' all ''
    substitute tmp.mm ' syl2and ' ' syl2andOLD ' all ''
    substitute tmp.mm ' sylanl1 ' ' sylanl1OLD ' all ''
    substitute tmp.mm ' sylanl2 ' ' sylanl2OLD ' all ''
    substitute tmp.mm ' sylanr1 ' ' sylanr1OLD ' all ''
    substitute tmp.mm ' sylanr2 ' ' sylanr2OLD ' all ''
    substitute tmp.mm ' sylani ' ' sylaniOLD ' all ''
    substitute tmp.mm ' sylan2i ' ' sylan2iOLD ' all ''
    substitute tmp.mm ' syl2ani ' ' syl2aniOLD ' all ''
    substitute tmp.mm ' sylancl ' ' sylanclOLD ' all ''
    substitute tmp.mm ' sylancr ' ' sylancrOLD ' all ''
    substitute tmp.mm ' sylanbrc ' ' sylanbrcOLD ' all ''
    substitute tmp.mm ' sylancb ' ' sylancbOLD ' all ''
    substitute tmp.mm ' sylancbr ' ' sylancbrOLD ' all ''
    substitute tmp.mm ' syl3an1OLD ' ' syl3an1OLDOLD ' all ''
    substitute tmp.mm ' syl3an1 ' ' syl3an1OLD ' all ''
    substitute tmp.mm ' syl3an2 ' ' syl3an2OLD ' all ''
    substitute tmp.mm ' syl3an3 ' ' syl3an3OLD ' all ''
    substitute tmp.mm ' syl3an1b ' ' syl3an1bOLD ' all ''
    substitute tmp.mm ' syl3an2b ' ' syl3an2bOLD ' all ''
    substitute tmp.mm ' syl3an3b ' ' syl3an3bOLD ' all ''
    substitute tmp.mm ' syl3an1br ' ' syl3an1brOLD ' all ''
    substitute tmp.mm ' syl3an2br ' ' syl3an2brOLD ' all ''
    substitute tmp.mm ' syl3an3br ' ' syl3an3brOLD ' all ''
    substitute tmp.mm ' syl3an ' ' syl3anOLD ' all ''
    substitute tmp.mm ' syl3anb ' ' syl3anbOLD ' all ''
    substitute tmp.mm ' syl3anbr ' ' syl3anbrOLD ' all ''
    substitute tmp.mm ' syld3an3 ' ' syld3an3OLD ' all ''
    substitute tmp.mm ' syld3an1 ' ' syld3an1OLD ' all ''
    substitute tmp.mm ' syld3an2 ' ' syld3an2OLD ' all ''
    substitute tmp.mm ' syl3anl1 ' ' syl3anl1OLD ' all ''
    substitute tmp.mm ' syl3anl2 ' ' syl3anl2OLD ' all ''
    substitute tmp.mm ' syl3anl3 ' ' syl3anl3OLD ' all ''
    substitute tmp.mm ' syl3anl ' ' syl3anlOLD ' all ''
    substitute tmp.mm ' syl3anr1 ' ' syl3anr1OLD ' all ''
    substitute tmp.mm ' syl3anr2 ' ' syl3anr2OLD ' all ''
    substitute tmp.mm ' syl3anr3 ' ' syl3anr3OLD ' all ''
    substitute tmp.mm ' syl5OLD ' ' syl5OLDOLD ' all ''
    substitute tmp.mm ' syl5com ' ' syl5comOLD ' all ''
    substitute tmp.mm ' syl5 ' ' syl5OLD ' all ''
    substitute tmp.mm ' syl5d ' ' syl5dOLD ' all ''
    substitute tmp.mm ' syl5ib ' ' syl5ibOLD ' all ''
    substitute tmp.mm ' syl5ibr ' ' syl5ibrOLD ' all ''
    substitute tmp.mm ' syl5bi ' ' syl5biOLD ' all ''
    substitute tmp.mm ' syl5cbi ' ' syl5cbiOLD ' all ''
    substitute tmp.mm ' syl5bir ' ' syl5birOLD ' all ''
    substitute tmp.mm ' syl5cbir ' ' syl5cbirOLD ' all ''
    substitute tmp.mm ' syl5bb ' ' syl5bbOLD ' all ''
    substitute tmp.mm ' syl5rbb ' ' syl5rbbOLD ' all ''
    substitute tmp.mm ' syl5bbr ' ' syl5bbrOLD ' all ''
    substitute tmp.mm ' syl5rbbr ' ' syl5rbbrOLD ' all ''
    substitute tmp.mm ' syl5eq ' ' syl5eqOLD ' all ''
    substitute tmp.mm ' syl5req ' ' syl5reqOLD ' all ''
    substitute tmp.mm ' syl5eqr ' ' syl5eqrOLD ' all ''
    substitute tmp.mm ' syl5reqr ' ' syl5reqrOLD ' all ''
    substitute tmp.mm ' syl5eqel ' ' syl5eqelOLD ' all ''
    substitute tmp.mm ' syl5eqelr ' ' syl5eqelrOLD ' all ''
    substitute tmp.mm ' syl5eleq ' ' syl5eleqOLD ' all ''
    substitute tmp.mm ' syl5eleqr ' ' syl5eleqrOLD ' all ''
    substitute tmp.mm ' syl5eqner ' ' syl5eqnerOLD ' all ''
    substitute tmp.mm ' syl5ss ' ' syl5ssOLD ' all ''
    substitute tmp.mm ' syl5ssr ' ' syl5ssrOLD ' all ''
    substitute tmp.mm ' syl5eqbr ' ' syl5eqbrOLD ' all ''
    substitute tmp.mm ' syl5eqbrr ' ' syl5eqbrrOLD ' all ''
    substitute tmp.mm ' syl5breq ' ' syl5breqOLD ' all ''
    substitute tmp.mm ' syl5breqr ' ' syl5breqrOLD ' all ''
    substitute tmp.mm ' syl7OLD ' ' syl7OLDOLD ' all ''
    substitute tmp.mm ' syl7 ' ' syl7OLD ' all ''
    substitute tmp.mm ' syl7ib ' ' syl7ibOLD ' all ''
    substitute tmp.mm ' syl6ss ' ' syl6sseq ' all ''
    substitute tmp.mm ' syl6ssr ' ' syl6sseqr ' all ''
    clean tmp.mm e
    copy tmp.mm mathbox.mm
    exit

Step 3.  Place mathbox.mm into the most recent set.mm.
'Verify proof *' should show no errors related to these changes.
You may want to delete the tmp.mm* files to clean up your directory.

Step 4 (optional):  Replace the syl*OLD references in your new set.mm
with the new versions.  You can find these with 'show usage syl*OLD'.
For example, suppose your theorem 'xyz' uses sylanOLD.  In the Proof
Assistant, type in the following commands (or run them from a script):

    MM> prove xyz
    MM-PA> minimize_with syl*an*,syl5*,syl7*/except *OLD/allow_growth
    MM-PA> save new_proof/compressed
    MM-PA> exit

If you don't replace the syl*OLDs, then I will do that when you next
submit your mathbox using my scripts; it is an easy change for me.  I
will keep the syl*OLDs in set.mm for about a year.

In the new syl5* names, "ib" and "bi" are swapped to reflect the
swapping of implication and biconditional in the new hypothesis order.
See the list at the top of set.mm for a few other renamings.



(4-Nov-2012) Structures have been renamed
-----------------------------------------

I made the changes of the 30-Oct-2012 proposal below.  All changes can
be seen at the top of set.mm in the 4-Nov-12 entries.

I also changed conflicting labels so that the "NEW" suffix could be
removed from the new structure theorems.  There are about 50 labels
with "NEW" removed, and another 50 or so older labels were renamed
to prevent conflict.  The old label renames generally follow this
scheme, where the added "o" stands for the "Op" added to "GrpOp" etc.:

  *grp* becomes *grpo*
  *abl* becomes *ablo*
  *ring* becomes *ringo*
  *divrng* becomes *divrngo*
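
The scheme is a simple stem rewrite.  As a hypothetical Python sketch
(the stems and the inserted "o" come from the list above; in set.mm the
rewrite was applied only to labels that actually conflicted):

```python
def rename_label(label):
    """Insert 'o' after the structure stem, per the conflict-rename scheme.
    In set.mm this was applied only to labels that conflicted with a "NEW"
    label; isgrp2i, for example, was left alone."""
    for stem in ('divrng', 'ring', 'abl', 'grp'):
        if stem in label:
            return label.replace(stem, stem + 'o', 1)
    return label

print(rename_label('isgrp2i'))  # isgrpo2i -- the form it would have if renamed
```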

Note that only conflicting ones were renamed.  Thus, for example,
isgrp2i for GrpOp was _not_ renamed to "isgrpo2i" because it didn't
conflict with any "NEW" label.  I left such labels unrenamed to keep
the change list smaller.

If people prefer that I rename all the GrpOp labels from *grp* to *grpo*
for better consistency, that is no big deal for me.  Let me know.


(30-Oct-2012) Proposed renaming of structures
              -------------------------------

For a few structures in the literature, the pair that includes a base
set is called a "space", such as topological space or metric space.  In
this case, the main name (without "space") is one of the pair's
elements:  the topology on a (base) set, the metric of a metric space.
This is reflected in set.mm's current naming:  Top vs.  TopSp, Met vs.
MetSp.

For most other structures, the name usually refers to the entire
structure.  In modern literature, a group is almost always an ordered
pair of a base set and an operation.

Prior to set theory, such as Cayley (1854), a group was a base set
"accompanied" by an operation in a way not formally defined.  Ordered
pairs had not been formally defined at this point in time.  Although a
group member was a member of the base set, I doubt that the base set in
isolation, independent of any operation, would have been called a group.

For brevity, modern authors sometimes also use "group" to mean the "base
set of the group", using context to disambiguate.  For example, a group
member is a member of the base set, since that is the only meaning that
makes sense.  And most of them explain this informal dual usage.  But
the formal meaning is still understood to be the ordered pair, which is
how they (in modern literature) usually define "group".

On the other hand, I don't recall ever seeing the isolated operation
itself referred to as a group.

I propose the following renaming to suggest, at least to some extent,
the literature usage.

  OLD       NEW

  Open      MetOpen     open sets of a metric space
  Grp       GrpOp       an operation for a group
  GrpNEW    Grp         a group
  SubGrp    SubGrpOp    the operation for a subgroup of a group
  Ring      RingOps     the pair of operations for a ring
  DivRing   DivRingOps  the pair of operations for a division ring
  Poset     PosetRel    an ordering relation for a poset
  PosetNEW  Poset       a poset
  Lat       LatRel      an ordering relation for a lattice
  LatNEW    Lat         a lattice
  Dir       DirRel      an ordering relation for a directed set
  Toset     TosetRel    an ordering relation for a totally ordered set
  TopSet    TopOpen     the topology extractor for a TopSp

For structures that exist only in mathboxes, I won't do any renaming,
but I may rename them if they are moved to the main set.mm.

I may also rename CVec, NrmCVec, CPreHil, CBan, CHil to CVecOps, etc.,
although I need to figure out what to do with them.  In a way they are too
specialized (for CC rather than general fields).  I might move them to
my mathbox, since I don't think anyone is using them.



(17-Sep-2012) Differences between REPLACE and ASSIGN
              --------------------------------------

ASSIGN can only accept step numbers that haven't been assigned yet
(i.e. those in SHOW NEW_PROOF/UNKNOWN).  REPLACE can potentially
accept any step number.

The assignment of $e and $f hypotheses of the theorem being proved must
be done with ASSIGN.  REPLACE does not (currently) allow them.

REPLACE fails if it does not find a complete subproof for the
step being replaced.  ASSIGN fails only if the conclusion of the
assigned statement cannot be unified with the unknown step.

REPLACE takes longer to run than ASSIGN.  In fact, the algorithm
used by REPLACE is the same as IMPROVE <step> /3 /DEPTH 1 /SUBPROOF
but using only the specified statement rather than scanning the entire
database.

Unlike IMPROVE, REPLACE can be used with steps having "shared" working
variables ($nn).  A "shared" working variable means that it is used
outside of any existing subproof of the replaced step.  In this case,
REPLACE attempts a guess at an assignment for the working variable, and
usually seems to be right.  However, it is possible that this
"aggressive" assignment can be wrong, and REPLACE will issue the
warning,

    Assignments to shared working variables ($nn) are guesses.  If
    incorrect, to undo DELETE STEP <step>, INITIALIZE, UNIFY, then assign
    them manually with LET and try REPLACE again.

I have also found that I can usually recover with just DELETE FLOATING,
INITIALIZE, UNIFY.  This is what I try first so that I can salvage what
REPLACE found.



(15-Sep-2012) Enhancements to label and step number arguments
              -----------------------------------------------

Version 0.07.81 (14-Sep-2012) of the metamath program has two minor
enhancements.  I added the first one primarily to make the REPLACE and
ASSIGN syntax uniform.  I have found it useful to try e.g.  "REPLACE
LAST abc" first, then if it fails use "ASSIGN LAST abc".  When REPLACE
is successful, it eliminates the tedium of manually assigning the
hypotheses.  In the future, we could have REPLACE automatically call
ASSIGN when it fails, but I want to get some experience with it first.

Enhancement #1:  Step numbers can now be specified with +nn, which
means the nn'th unassigned step counting from the first in SHOW
NEW_PROOF/UNKNOWN.  This complements the previously existing -nn, which
means the nn'th unassigned step counting from the last in SHOW
NEW_PROOF/UNKNOWN.  This change affects ASSIGN, REPLACE, IMPROVE, and
LET STEP and makes their step specification argument uniform.

For example, consider:

    MM-PA> show new_proof/unknown
    6   mpd.1=? $? |- ( ph -> ( ph -> ph ) )
    7   mpd.2=? $? |- ( ph -> ( ( ph -> ph ) -> ph ) )

ASSIGN LAST or ASSIGN -0 refers to step 7.
ASSIGN -1 refers to step 6.
ASSIGN FIRST or ASSIGN +0 refers to step 6.
ASSIGN +1 refers to step 7.
ASSIGN 6 refers to step 6.
ASSIGN +6 is illegal, because there aren't at least 7 unknown steps.
ASSIGN 1 is illegal, because step 1 isn't unknown.
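
These addressing rules can be sketched in Python (an illustrative
sketch of the rules above, not the program's actual code); here
unknown_steps is the ascending list of step numbers shown by SHOW
NEW_PROOF/UNKNOWN:

```python
def resolve_step(spec, unknown_steps):
    """Resolve a step argument for ASSIGN/REPLACE/IMPROVE/LET STEP.
    spec is 'FIRST', 'LAST', '+nn', '-nn', or a plain step number."""
    if spec == 'FIRST':
        spec = '+0'
    elif spec == 'LAST':
        spec = '-0'
    if spec[0] in '+-':
        n = int(spec[1:])
        if n >= len(unknown_steps):
            raise ValueError('not enough unknown steps')
        return unknown_steps[n] if spec[0] == '+' else unknown_steps[-1 - n]
    step = int(spec)
    if step not in unknown_steps:
        raise ValueError('step %d is not unknown' % step)
    return step

# With unknown steps 6 and 7, as in the example above:
print(resolve_step('LAST', [6, 7]), resolve_step('+1', [6, 7]))  # 7 7
```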

Enhancement #2:  The unique label argument needed for PROVE, ASSIGN, and
REPLACE now allows wildcards, provided that there is a unique match.
This can sometimes save typing.  For example, "into*3" matches only the
theorem "intopcoaconlem3".  So instead of typing the long theorem name,
you can use "into*3" as a shortcut:

    MM> prove into*3
    Entering the Proof Assistant.  HELP PROOF_ASSISTANT for help, EXIT to
    exit.
    You will be working on statement (from "SHOW STATEMENT intopcoaconlem3"):

If you use too few characters for a unique match, the program will tell you:

    MM> prove int*3
    ?This command requires a unique label, but there are 3 (allowed)
    matches for "int*3".  The first 2 are "intmin3" and "inttop3".  Use
    SHOW LABELS "int*3" to see all matches.

Of course, you can check in advance to see if it's unique by typing
SHOW LABELS into*3.

It is also now easier to assign $e hypotheses.  For example,

    MM-PA> ASSIGN LAST *.a

will assign the $e hypothesis ending with ".a", because there are no
$a or $p statements in set.mm ending with ".a".  (Any $e hypotheses
not belonging to the statement being proved are ignored by the
wildcard scan.)
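
The unique-match rule can be sketched with Python's fnmatch module (an
illustrative sketch; "*" matches any sequence of characters, as in the
program's wildcards):

```python
import fnmatch

def unique_match(pattern, labels):
    """Return the unique label matching the wildcard pattern, or raise
    a ValueError naming the match count, as the program does."""
    matches = [lab for lab in labels if fnmatch.fnmatchcase(lab, pattern)]
    if len(matches) != 1:
        raise ValueError('%d matches for "%s"; a unique label is required'
                         % (len(matches), pattern))
    return matches[0]

labels = ['intmin3', 'inttop3', 'intopcoaconlem3']
print(unique_match('into*3', labels))  # intopcoaconlem3
```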



(12-Sep-2012) New IMPROVE qualifiers and improved REPLACE command
              ---------------------------------------------------

New IMPROVE qualifiers
----------------------

Version 0.07.80 (4-Sep-2012) of the metamath program has the additional
qualifiers /2, /3, and /SUBPROOFS for the IMPROVE command.  When using
this version, and especially the new qualifiers, please save your work
since the runtime can be unpredictable (possibly hours).  There is no
way to cancel except to abort the program.  And, because of the new
code, there is the possibility of a bug I didn't find in testing.  (If
you find a bug, please let me know, of course.)

From HELP IMPROVE:

        / 1 - Use the traditional search algorithm used in earlier versions
            of this program.  It is the default.  It tries to match cut-free
            statements only (those not having variables in their
            hypotheses that are not in the conclusion).  It is the fastest
            method when it can find a proof.
        / 2 - Try to match statements with cuts.  It also tries to match
            steps containing working ($nn) variables when they don't share
            working variables with the rest of the proof.  It runs slower
            than / 1.
        / 3 - Attempt to find (cut-free) proofs of $e hypotheses that result
            from a trial match, unlike / 2, which only attempts (cut-free)
            proofs of $f hypotheses.  It runs much slower than / 1, and you
            may prefer to use it with specific statements.  For example, if
            step 456 is unknown, you may want to use IMPROVE 456 / 3 rather
            than IMPROVE ALL / 3.  Note that / 3 respects the / DEPTH
            qualifier, although at the expense of additional run time.
        / SUBPROOFS - Look at each subproof that isn't completely known, and
            try to see if it can be proved independently.  This qualifier is
            meaningful only for IMPROVE ALL / 2 or IMPROVE ALL / 3.  It may
            take a very long time to run, especially with / 3.

    Note that / 2 includes the search of / 1, and / 3 includes / 2.
    Specifying / 1 / 2 / 3 has the same effect as specifying just / 3, so
    there is no need to specify more than one.  Finally, since / 1 is the
    default, you never need to use it; it is included for completeness (or
    in case the default is changed in the future).

Here is an example you can duplicate if you want:

    MM> prove eqtr3i
    Entering the Proof Assistant.  HELP PROOF_ASSISTANT for help, EXIT to exit.
    You will be working on statement (from "SHOW STATEMENT eqtr3i"):
    5507 eqtr3i.1 $e |- A = B $.
    5508 eqtr3i.2 $e |- A = C $.
    5509 eqtr3i $p |- B = C $= ... $.
    Note:  The proof you are starting with is already complete.
    MM-PA> delete all
    The entire proof was deleted.
    1 eqtr3i=? $? |- B = C
    MM-PA> improve all
    No new subproofs were found.
    MM-PA> improve all /2
    Pass 1:  Trying to match cut-free statements...
    Pass 2:  Trying to match all statements...
    No new subproofs were found.
    MM-PA> improve all /3
    Pass 1:  Trying to match cut-free statements...
    Pass 2:  Trying to match all statements, with cut-free hypothesis matches...
    No new subproofs were found.
    MM-PA> improve all /3 /depth 1
    Pass 1:  Trying to match cut-free statements...
    Pass 2:  Trying to match all statements, with cut-free hypothesis matches...
    A proof of length 9 was found for step 1.
    Steps 1 and above have been renumbered.
    CONGRATULATIONS!  The proof is complete.  Use SAVE NEW_PROOF to save it.
    Note:  The Proof Assistant does not detect $d violations.  After saving
    the proof, you should verify it with VERIFY PROOF.
    MM-PA> show new_proof
    6     eqcomi.1=eqtr3i.1 $e |- A = B
    7   eqtri.1=eqcomi    $p |- B = A
    8   eqtri.2=eqtr3i.2  $e |- A = C
    9 eqtr3i=eqtri      $p |- B = C
    MM-PA>

Explanation:

IMPROVE ALL /1 (the default, i.e. the old algorithm) didn't even
consider eqtri because it is not cut-free, i.e. it has variable "A" in
the hypotheses that isn't in the conclusion.
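
The cut-free test amounts to a variable-containment check.  As a
sketch, modeling statements simply as sets of variable names (an
illustrative assumption; the actual program inspects parsed formulas):

```python
def is_cut_free(hyp_var_sets, concl_vars):
    """A statement is cut-free when no variable occurs in a hypothesis
    without also occurring in the conclusion."""
    hyp_vars = set().union(*hyp_var_sets) if hyp_var_sets else set()
    return hyp_vars <= set(concl_vars)

# eqtri as used here: hypotheses |- B = A and |- A = C, conclusion
# |- B = C.  Variable A occurs only in the hypotheses, so not cut-free:
print(is_cut_free([{'B', 'A'}, {'A', 'C'}], {'B', 'C'}))  # False
```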

IMPROVE ALL /2 did not prove it because there was no match to hypothesis
eqtri.1 "|- B = A" in the hypotheses (or other parts of the proof, if
there were any) of eqtr3i.

IMPROVE ALL /3 alone didn't prove it because there was no statement with
0 $e hypotheses that matched eqtri.1 "|- B = A".  It did not consider
eqcomi, which has a $e hypothesis.

IMPROVE ALL /3 /DEPTH 1 considered eqcomi, which has 1 $e hypothesis,
and the hypothesis eqcomi.1 matched eqtr3i.1, proving the theorem.

Each of these qualifiers takes longer to run than the previous.  In
addition, it is better to try depth 0 (default) first, then /depth 1,
then /depth 2,... because (in addition to much greater runtime) each
increasing depth may result in a less efficient proof.  For example,
'improve /3 /depth 2' finds a proof, but it will have a redundant use of
idi (try it).

The /SUBPROOF qualifier
-----------------------

The /SUBPROOF qualifier is occasionally useful if you have a proof that
is a tangled mess with many unknown steps, and you want to see if
something simpler will prove parts of it.  Originally I had /SUBPROOF as
the default for /2 and /3, but later I made it a separate qualifier
because it can take a huge amount of runtime, especially with /3.

The /SUBPROOF qualifier found a proof for the following (somewhat
contrived) example that you can duplicate if you want.

    MM> prove dalem22
    Entering the Proof Assistant.  HELP PROOF_ASSISTANT for help, EXIT to exit.
    You will be working on statement (from "SHOW STATEMENT dalem22"):
    70869 dalem.ph $e |- ( ph <-> ( ( ( K e. HL /\ C e. A ) /\ ( P e. A /\ Q e.
          A /\ R e. A ) /\ ( S e. A /\ T e. A /\ U e. A ) ) /\ ( Y e. O /\ Z e.
          O ) /\ ( ( -. C L ( P J Q ) /\ -. C L ( Q J R ) /\ -. C L ( R J P ) )
          /\ ( -. C L ( S J T ) /\ -. C L ( T J U ) /\ -. C L ( U J S ) ) /\ (
          C L ( P J S ) /\ C L ( Q J T ) /\ C L ( R J U ) ) ) ) ) $.
    70870 dalem.l $e |- L = ( le ` K ) $.
    70871 dalem.j $e |- J = ( join ` K ) $.
    70872 dalem.a $e |- A = ( Atoms ` K ) $.
    70873 dalem.ps $e |- ( ps <-> ( ( c e. A /\ d e. A ) /\ -. c L Y /\ ( d =/=
          c /\ -. d L Y /\ C L ( c J d ) ) ) ) $.
    70905 dalem22.o $e |- O = ( LPlanes ` K ) $.
    70906 dalem22.y $e |- Y = ( ( P J Q ) J R ) $.
    70907 dalem22.z $e |- Z = ( ( S J T ) J U ) $.
    70908 dalem22 $p |- ( ( ph /\ Y = Z /\ ps ) -> ( ( c J d ) J ( P J S ) ) e.
          O ) $= ... $.
    Note:  The proof you are starting with is already complete.
    MM-PA> delete step 70
    A 34-step subproof at step 70 was deleted.  Steps 70:276 are now 37:243.
     37   mpbid.min=?           $? |- ( ( ph /\ Y = Z /\ ps ) -> ( ( c J d ) (
                                                  meet ` K ) ( P J S ) ) e. A )
    MM-PA> assign last dalem21
    To undo the assignment, DELETE STEP 65 and if needed INITIALIZE, UNIFY.
     56     dalem.ph=?            $? |- ( ph <-> ( ( ( $12 e. HL /\ $13 e. A )
    /\ ( P e. A /\ $14 e. A /\ $15 e. A ) /\ ( S e. A /\ $16 e. A /\ $17 e. A )
    ) /\ ( Y e. $18 /\ Z e. $18 ) /\ ( ( -. $13 $19 ( P J $14 ) /\ -. $13 $19 (
      $14 J $15 ) /\ -. $13 $19 ( $15 J P ) ) /\ ( -. $13 $19 ( S J $16 ) /\ -.
    $13 $19 ( $16 J $17 ) /\ -. $13 $19 ( $17 J S ) ) /\ ( $13 $19 ( P J S ) /\
                         $13 $19 ( $14 J $16 ) /\ $13 $19 ( $15 J $17 ) ) ) ) )
     57     dalem.l=?             $? |- $19 = ( le ` $12 )
     58     dalem.j=?             $? |- J = ( join ` $12 )
     59     dalem.a=?             $? |- A = ( Atoms ` $12 )
     60     dalem.ps=?            $? |- ( ps <-> ( ( c e. A /\ d e. A ) /\ -. c
                    $19 Y /\ ( d =/= c /\ -. d $19 Y /\ $13 $19 ( c J d ) ) ) )
     61     dalem21.m=?           $? |- ( meet ` K ) = ( meet ` $12 )
     62     dalem21.o=?           $? |- $18 = ( LPlanes ` $12 )
     63     dalem21.y=?           $? |- Y = ( ( P J $14 ) J $15 )
     64     dalem21.z=?           $? |- Z = ( ( S J $16 ) J $17 )
    MM-PA> improve all /2 /subproof
    Pass 1:  Trying to match cut-free statements...
    A proof of length 1 was found for step 55.
    A proof of length 1 was found for step 54.
    A proof of length 1 was found for step 53.
    A proof of length 1 was found for step 52.
    A proof of length 3 was found for step 50.
    A proof of length 1 was found for step 47.
    A proof of length 1 was found for step 44.
    A proof of length 1 was found for step 41.
    A proof of length 1 was found for step 39.
    A proof of length 1 was found for step 38.
    A proof of length 1 was found for step 37.
    Pass 2:  Trying to match all statements...
    Pass 3:  Trying to replace incomplete subproofs...
    A proof of length 34 was found for step 67.
    Steps 37 and above have been renumbered.
    CONGRATULATIONS!  The proof is complete.  Use SAVE NEW_PROOF to save it.
    Note:  The Proof Assistant does not detect $d violations.  After saving
    the proof, you should verify it with VERIFY PROOF.
    MM-PA>

In this case, just before "IMPROVE ALL /2/SUBPROOF", step 37 (which
became step 65 after the last assignment) had the known content "$p |- (
( ph /\ Y = Z /\ ps ) -> ( ( c J d ) ( meet ` K ) ( P J S ) ) e. A )".
However, the subproof ending at dalem21 contained a hopeless mess of $nn
work variables.

The /subproof algorithm did the following.  It scanned the proof and saw
that step 65 had an incomplete subproof.  So it then tried to prove step
65 (using the /2 algorithm) completely independently of the existing
incomplete subproof and found a new proof.  It then deleted the existing
subproof and replaced it with the one it found.  It didn't matter what
the existing subproof contained:  it could have been total "garbage" with
nothing at all to do with the final proof that was found.

I don't yet have a good feel for when /subproof will help or not.
However, sometimes it can take a very long time to run, so you should
"save new_proof" then "write source" before you try it, in case you have
to abort it.

Improved REPLACE command
------------------------

Recall the first part of the previous example:

    MM> prove dalem22
    Entering the Proof Assistant.  HELP PROOF_ASSISTANT for help, EXIT to exit.
    You will be working on statement (from "SHOW STATEMENT dalem22"):
    70869 dalem.ph $e |- ( ph <-> ( ( ( K e. HL /\ C e. A ) /\ ( P e. A /\ Q e.
          A /\ R e. A ) /\ ( S e. A /\ T e. A /\ U e. A ) ) /\ ( Y e. O /\ Z e.
          O ) /\ ( ( -. C L ( P J Q ) /\ -. C L ( Q J R ) /\ -. C L ( R J P ) )
          /\ ( -. C L ( S J T ) /\ -. C L ( T J U ) /\ -. C L ( U J S ) ) /\ (
          C L ( P J S ) /\ C L ( Q J T ) /\ C L ( R J U ) ) ) ) ) $.
    70870 dalem.l $e |- L = ( le ` K ) $.
    70871 dalem.j $e |- J = ( join ` K ) $.
    70872 dalem.a $e |- A = ( Atoms ` K ) $.
    70873 dalem.ps $e |- ( ps <-> ( ( c e. A /\ d e. A ) /\ -. c L Y /\ ( d =/=
          c /\ -. d L Y /\ C L ( c J d ) ) ) ) $.
    70905 dalem22.o $e |- O = ( LPlanes ` K ) $.
    70906 dalem22.y $e |- Y = ( ( P J Q ) J R ) $.
    70907 dalem22.z $e |- Z = ( ( S J T ) J U ) $.
    70908 dalem22 $p |- ( ( ph /\ Y = Z /\ ps ) -> ( ( c J d ) J ( P J S ) ) e.
          O ) $= ... $.
    Note:  The proof you are starting with is already complete.
    MM-PA> delete step 70
    A 34-step subproof at step 70 was deleted.  Steps 70:276 are now 37:243.
     37   mpbid.min=?           $? |- ( ( ph /\ Y = Z /\ ps ) -> ( ( c J d ) (
                                                  meet ` K ) ( P J S ) ) e. A )

Suppose we know in advance that dalem21 is the correct statement to
assign to step 37.  If we type "ASSIGN LAST dalem21", we get the 9
hypotheses, as shown in the previous section, that have to be assigned
by hand; even worse, there are a lot of $nn work variables in those
hypotheses that we have to figure out.

The REPLACE command has been enhanced so that it can "replace" any step
at all, even unknown steps that have not been assigned yet, provided
that it can find a match to all of the hypotheses of the REPLACE
label argument.

So above, instead of "ASSIGN LAST dalem21", we can type
"REPLACE LAST dalem21".  We will see:

    MM-PA> replace last dalem21
    CONGRATULATIONS!  The proof is complete.  Use SAVE NEW_PROOF to save it.
    Note:  The Proof Assistant does not detect $d violations.  After saving
    the proof, you should verify it with VERIFY PROOF.

Thus we have eliminated the tedium of having to assign by hand the 9
hypotheses resulting from ASSIGN.  If REPLACE cannot find matches for
all the hypotheses, it will say so, and in that case you will have to go
back to ASSIGN to complete the proof.  But in a lot of cases, it may be
worth a try.

Note that you can also REPLACE a step that has been assigned, even with
the same label it has been assigned to.  For example, if we typed
"ASSIGN LAST dalem21", then step 37 would become step 65, and we can
then try "REPLACE 65 dalem21".



(24-Mar-2012) MINIMIZE_WITH usage suggestions
              -------------------------------

Recently, the metamath program was changed so that the scan order in the
MINIMIZE_WITH command is now bottom to top instead of top to bottom.  I
chose this because empirically, slightly shorter proofs often result
because more "advanced" theorems get used first.  For large proofs, it
can be useful to run the original order using the new /REVERSE
qualifier, to see which order results in the smallest proof.

The script that I almost always use is the following:

  MINIMIZE_WITH * /BRIEF /EXCEPT 3*tr*,3syl,*OLD,ee*
  MINIMIZE_WITH 3*tr* /BRIEF /EXCEPT *OLD
  SHOW NEW_PROOF /COMPRESSED

Running the 3*tr*'s last sometimes results in a more optimal use of
3bitr*, 3eqtr*, etc.  The ee* exclusion forces standard theorems to
be used in place of identical ones in Alan Sare's utility set.

You may need to use /NO_DISTINCT with some theorems to avoid
distinct variable conflicts.  (At some future point hopefully
this won't be required.)

I've noticed that minimizing with 3syl often increases rather than
decreases the size of the compressed proof.  If I suspect 3syl could
be useful, I save the proof, then MINIMIZE_WITH 3syl; if the compressed
proof becomes shorter, I use it, otherwise I discard it.

On larger proofs, I save the source before minimization into a
separate file (SAVE NEW_PROOF then WRITE SOURCE xxx), then run
the following script on the saved file:

  MINIMIZE_WITH * /BRIEF /EXCEPT 3*tr*,3syl,*OLD,ee* /REVERSE
  MINIMIZE_WITH 3*tr* /BRIEF /EXCEPT *OLD /REVERSE
  SHOW NEW_PROOF /COMPRESSED

I then pick whichever compressed proof is the shortest.

Finally, recall that mathboxes are excluded unless you use the
/INCLUDE_MATHBOXES (/I) qualifier.  Right now, your own mathbox is also
excluded, unfortunately; I hope to fix this in a future release.



(19-Mar-2012) A new approach to antecedent manipulation
              -----------------------------------------

In the 18-Aug-2011 entry below, I described the problem of having to
"waste" proof steps to manipulate antecedents in order to match the
antecedents of referenced theorems.  My initial proposal, which was to
adopt "canonical" parenthesizations, had a number of problems described
in the 31-Aug-2011 entry below, and I gave up on that approach.

Here I will describe a new approach that I have been using in my mathbox
for a while; it seems to work reasonably well.  I think it is much easier to
use than the sequence of antecedent commuting (such as ancom1s),
re-parenthesizations (such as anassrs), and adding antecedents (such as
ad2ant2r) that has "traditionally" been used in set.mm and which can be
tedious and frustrating to do optimally.  Moreover, it often results in
a shorter compressed proof.  The uncompressed proof tends to be longer,
but usually this isn't a concern since we ordinarily store compressed
proofs.

Note that the new method is not a requirement for new proofs but is
simply a suggested method available to the user.

At the end of this entry I compare an example proof (divdivdiv)
using the old and new methods.


New utility theorems
--------------------

For use by the new method, I added to set.mm 117 simp* theorems
(simplification to a single conjunct, simpll1 through simp333) and 28
syl*anc theorems (syllogism with contraction, syl11anc through
syl333anc).  The simp* theorems will handle all possible simplifications
of double and triple conjunctions nested up to 2 levels deep.  The
syl*anc theorems will handle all possible target antecedents with double
and triple conjunctions nested up to 1 level deep.

Even though this is a lot of theorems, I think that over time they will
reduce the size of set.mm via the shorter proofs that result.

They are consistently named.  For example, in the simp* example

  simp31l $p |- ( ( ta /\ et /\ ( ( ph /\ ps ) /\ ch /\ th ) ) -> ph ) $=

the

  "3" means the 3rd member of the outermost triple conjunction,
  "1" means the 1st member of the next-nested triple conjunction, and
  "l" means the left member of the next-nested double conjunction.

In general, "1", "2", "3" refer to triple conjunctions and "l", "r" to
double conjunctions.
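
The convention can be sketched as a selector over nested conjunctions,
modeled here as nested tuples of atom names (an illustrative
assumption; set.mm of course works on actual wffs):

```python
def simp_select(name, conj):
    """Walk a simp* suffix: digits 1/2/3 index a triple conjunction,
    and 'l'/'r' pick the left/right member of a double conjunction."""
    expr = conj
    for ch in name[len('simp'):]:
        if ch in '123':
            assert isinstance(expr, tuple) and len(expr) == 3
            expr = expr[int(ch) - 1]
        else:
            assert isinstance(expr, tuple) and len(expr) == 2
            expr = expr[0 if ch == 'l' else 1]
    return expr

# The antecedent of simp31l: ( ta /\ et /\ ( ( ph /\ ps ) /\ ch /\ th ) )
ante = ('ta', 'et', (('ph', 'ps'), 'ch', 'th'))
print(simp_select('simp31l', ante))  # ph
```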

The following example shows the syl*anc naming convention:

  syl231anc.1 $e |- ( ph -> ps ) $.
  syl231anc.2 $e |- ( ph -> ch ) $.
  syl231anc.3 $e |- ( ph -> th ) $.
  syl231anc.4 $e |- ( ph -> ta ) $.
  syl231anc.5 $e |- ( ph -> et ) $.
  syl231anc.6 $e |- ( ph -> ze ) $.
  syl231anc.7 $e |- ( ( ( ps /\ ch ) /\ ( th /\ ta /\ et ) /\ ze ) -> si ) $.
  syl231anc $p |- ( ph -> si ) $= ... $.

the "231" (a string of length 3) means there are 3 outer conjuncts
composed of a double ("2"), a triple ("3"), and a "unary" ("1")
conjunction respectively.
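
In other words, the digit string encodes both the shape of the
referenced antecedent and the hypothesis count.  A sketch (assuming the
scheme generalizes exactly as described above):

```python
def sylanc_shape(label):
    """Decode a syl*anc label: each digit is the size of one outer
    conjunct of the referenced antecedent; there is one ( ph -> ... )
    hypothesis per conjunct plus one for the referenced statement."""
    digits = [int(d) for d in label[len('syl'):-len('anc')]]
    return {'outer_conjuncts': len(digits), 'hypotheses': sum(digits) + 1}

print(sylanc_shape('syl231anc'))  # {'outer_conjuncts': 3, 'hypotheses': 7}
```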

In a proof, the simp* and syl*anc theorems would be used in the following way:
        ...
   sylxxxanc.1=simpxxx       |- (main antecedent) -> (piece of ref'd antecdnt)
   sylxxxanc.2=simpxxx       |- (main antecedent) -> (piece of ref'd antecdnt)
   sylxxxanc.3=simpxxx       |- (main antecedent) -> (piece of ref'd antecdnt)
   sylxxxanc.4=(ref'd thm)   |- (ref'd antecent) -> (ref'd theorem)
 (target)=sylxxxanc        |- (main antecedent) -> (ref'd theorem)
        ...


The new procedure
-----------------

You can use the following procedure to create this kind of proof,
without having to remember the names of the simp* and syl*anc theorems.
(See divdivdiv below for an example.)

  1. Use syl to connect the main antecedent to the reference theorem.

  2. Assign the referenced theorem to syl.2.

  3. Apply 3jca and jca as many times as necessary to break up the
     expression now assigned to syl.1 into individual conjuncts on the
     right-hand side.  (Hint: if in doubt whether the outermost conjunction
     is double or triple, try 3jca first.)

  4. Use IMPROVE ALL to assign the simp* theorems to the jca and 3jca
     hypotheses automatically.

When the proof is done, use 'MINIMIZE_WITH syl*anc / BRIEF'.  The
syl*anc theorems will automatically replace the syl and shorten the
proof.

Sometimes the jca and 3jca breakups won't be proved automatically
with MINIMIZE_WITH, such as when the closure of a compound operation
is needed.  In that case, repeat the above 4 steps recursively:
apply syl, assign the closure theorem to syl.2, and break up
syl.1 with 3jca and jca.


Guidelines for new theorems
---------------------------

In order for new theorems to be compatible with uses of syl*anc in
theorems referencing them, the important thing (for up to 9 antecedent
conjuncts) is that

  1. In the theorem's statement, the conjunct nesting level in the
     antecedent should be no greater than one.

For example, ( ( ph /\ ps ) /\ ( ch /\ th ) ) is acceptable,
whereas ( ( ( ph /\ ps ) /\ ch ) /\ th ) is not.  (For more than 9
conjuncts, which is rare, I don't have a guideline.)

The simp* theorems provide for all possibilities up to 2 nesting levels.
Thus additional antecedent conjuncts can be present as needed inside of
the proof (as would be the case, for example, before applying
pm2.61dan).  The main guideline for antecedents inside of proofs is that

  2. In a proof step, the conjunct nesting level in the antecedent
     should be no greater than two.

The maximum number of antecedent conjuncts inside of a proof to which
simp* can apply is 27, achieved with triple conjunctions nested 2
levels deep.


Obsolete theorems
-----------------

I plan to make the following changes within a week or so.

The following 4 existing theorems will be replaced by new ones with a
different hypothesis order.  (The new ones are in set.mm already.)

  Obsolete    Will be replaced by    |    Obsolete    Will be replaced by
  --------    -------------------    |    --------    -------------------
                                     |
  sylanc      syl11anc               |    syl3anc     syl111anc
  syl2anc     syl22anc               |    syl3an2c    syl13anc

The following 49 existing theorems will be renamed for naming consistency.

  Old name    New name will be       |    Old name    New name will be
  --------    ----------------       |    --------    ----------------
                                     |
  sylan31c    syl21anc               |    3simp1i     simp1i
  sylan32c    syl12anc               |    3simp2i     simp2i
  pm3.26im    simplim                |    3simp3i     simp3i
  pm3.27im    simprim                |    3simp1d     simp1d
  pm3.26      simpl                  |    3simp2d     simp2d
  pm3.26i     simpli                 |    3simp3d     simp3d
  pm3.26d     simplld                |    3simp1bi    simp1bi
  pm3.26bi    simplbi                |    3simp2bi    simp2bi
  pm3.27      simpr                  |    3simp3bi    simp3bi
  pm3.27i     simpri                 |    3simp1l     simp1l
  pm3.27d     simprd                 |    3simp1r     simp1r
  pm3.27bi    simprbi                |    3simp2l     simp2l
  pm3.26bda   simprbda               |    3simp2r     simp2r
  pm3.27bda   simplbda               |    3simp3l     simp3l
  pm3.26bi2   simplbi2               |    3simp3r     simp3r
  pm3.26bi2VD simplbi2VD             |    3simp11     simp11
  3simp1      simp1                  |    3simp12     simp12
  3simp2      simp2                  |    3simp13     simp13
  3simp3      simp3                  |    3simp21     simp21
  3simpl1     simpl1                 |    3simp22     simp22
  3simpl2     simpl2                 |    3simp23     simp23
  3simpl3     simpl3                 |    3simp31     simp31
  3simpr1     simpr1                 |    3simp32     simp32
  3simpr2     simpr2                 |    3simp33     simp33
  3simpr3     simpr3                 |


Example - proof of theorem divdivdiv
------------------------------------

Compare:
http://us2.metamath.org:88/mpegif/divdivdiv.html - new
http://us2.metamath.org:88/mpegif/divdivdivOLD.html - old

As an example, I re-proved the existing divdivdiv (which I remember as
having been somewhat tedious to shorten with the traditional
method).  You can compare them as divdivdiv vs. divdivdivOLD.  The
uncompressed proof of divdivdiv is about 25% larger than that of
divdivdivOLD, but the compressed proof is slightly smaller and there are
fewer steps on the web page.

As an example in this proof, one of the steps needed was:

   555         eqtrd.2=?            $? |- ( ( ( A e. CC /\ ( B e. CC /\ B
  =/= 0 ) ) /\ ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) -> (
                  ( A / B ) x. ( D / C ) ) = ( ( A x. D ) / ( B x. C ) ) )

The consequent matches divmuldiv.  To connect the antecedent, I
performed the following steps:

  assign last syl
  assign last divmuldiv

obtaining:

   558           syl.1=?              $? |- ( ( ( A e. CC /\ ( B e. CC /\
  B =/= 0 ) ) /\ ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
    ( ( A e. CC /\ D e. CC ) /\ ( ( B e. CC /\ B =/= 0 ) /\ ( C e. CC /\ C
                                                             =/= 0 ) ) ) )

Next I broke up the consequent with jca's:

  assign last jca
  improve all
  assign last jca
  improve all
    ...

until no conjunctions remained in the consequent.

Finally, I did "MINIMIZE_WITH * /BRIEF" resulting in:

   648           sylXanc.1=simpll     $p |- ( ( ( A e. CC /\ ( B e. CC /\
  B =/= 0 ) ) /\ ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
                                                                 A e. CC )
   673           sylXanc.2=simprrl    $p |- ( ( ( A e. CC /\ ( B e. CC /\
  B =/= 0 ) ) /\ ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
                                                                 D e. CC )
   699           sylXanc.3=simplr     $p |- ( ( ( A e. CC /\ ( B e. CC /\
  B =/= 0 ) ) /\ ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
                                                  ( B e. CC /\ B =/= 0 ) )
   725           sylXanc.4=simprl     $p |- ( ( ( A e. CC /\ ( B e. CC /\
  B =/= 0 ) ) /\ ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
                                                  ( C e. CC /\ C =/= 0 ) )
   730           syl22anc.5=divmuldiv $p |- ( ( ( A e. CC /\ D e. CC ) /\
  Press <return> for more, Q <ret> quit, S <ret> scroll, B <ret> back up...
  ( ( B e. CC /\ B =/= 0 ) /\ ( C e. CC /\ C =/= 0 ) ) ) -> ( ( A / B ) x.
                               ( D / C ) ) = ( ( A x. D ) / ( B x. C ) ) )
   731         eqtrd.2=syl22anc     $p |- ( ( ( A e. CC /\ ( B e. CC /\ B
  =/= 0 ) ) /\ ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) -> (
                  ( A / B ) x. ( D / C ) ) = ( ( A x. D ) / ( B x. C ) ) )

Note that syl, jca, and 3jca were the only theorems I had to know the
names of.  I didn't need to remember the names of simprrl, syl22anc,
etc., since those are found automatically.


(End of 19-Mar-2012 A new approach to antecedent manipulation)

-------------------------------------------------------------------------
-------------------------------------------------------------------------



(15-Sep-2011) Partial functions and restricted iota
              -------------------------------------

This is a proposal to add definition df-riota below.  As usual, any
comments are welcome.

The current iota definition is

  df-iota $a |- ( iota x ph ) = U. { y | { x | ph } = { y } }

Consider a poset (partially ordered set) with a base set B and a
relation R.  The supremum (least upper bound) of a subset S (of the base
set B) is the unique member of B (if there is one) such that

    A. y e. S y R x /\ A. z e. B ( A. y e. S y R z -> x R z )

The LUB is a partial function on the subsets of B:  for some subsets it
may "exist" (in the textbook sense of this verb), and for others it may
not.  There are several ways to deal with partial functions.  One useful
way is to define "the value exists" as meaning "the value is a member of
B" i.e.  "E. x e. B ...", since the domain of discourse is the base set
B.  This is analogous to the set-theoretical use of "exists" to mean
"is a member of the domain of discourse _V."

Now, it turns out that the set ~P U. B is not a member of B, which can
be proved without invoking the Axiom of Regularity (see theorem
pwuninel).  We can define an "undefined value" function:

  ( Undef ` B ) = ~P U. B    so that   ( Undef ` B ) e/ B

This device can be used to work with partial functions generally, the
case of posets being one example.

The problem with using the standard iota for defining the LUB is that
the iota returns the empty set (/) when it is not meaningful, and (/)
could be a member of B.  In order to get a guaranteed non-member of B
when the LUB doesn't "exist", we can't (easily) use the standard iota
but need the following awkward and non-intuitive definition of LUB:

  ( lub ` S ) = { t | E. u ( u = { x e. B | ( A. y e. S y R x
                              /\ A. z e. B ( A. y e. S y R z -> x R z ) ) }
                      /\ t = if ( E. v u = { v } , U. u , ( Undef ` B ) ) ) }

(The purpose of the "{ t | E. u ... }" is to avoid having to repeat
"A. y e. S y R x /\ A. z e. B ( A. y e. S y R z -> x R z )" twice,
which would be required if we used df-iota.)

We can introduce a "restricted iota" as follows:

  df-riota $a |- ( iota x e. A ph ) = if ( E! x e. A ph ,
                    ( iota x ( x e. A /\ ph ) ) , ( Undef ` A ) )

Using df-riota, the LUB becomes

  ( lub ` S ) = ( iota x e. B
         ( A. y e. S y R x /\ A. z e. B ( A. y e. S y R z -> x R z ) ) )

Thus df-riota above, even though somewhat awkward, seems to provide the
most useful tool.  Some properties we would have are:

  iotariota $p |- ( iota x ph ) = ( iota x e. _V ph )

  riotaiota $p |- ( E! x e. A ph -> ( iota x e. A ph ) = ( iota x ( x e. A
      /\ ph ) ) )

Most importantly, the "closure" of restricted iota is equivalent to
its "existence" in the textbook sense:

  riotaclb.1 $e |- A e. _V $.
  riotaclb $p |- ( E! x e. A ph <-> ( iota x e. A ph ) e. A )

thus making it very useful for working with partial functions.
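To make the intended semantics concrete, here is a small finite model
in Python (a sketch only: the names undef and riota and the toy set A
are illustrative, not set.mm objects).  riota returns the unique
witness when E! holds, and otherwise an element guaranteed to lie
outside A, mirroring the equivalence in riotaclb:

```python
import itertools

def undef(A):
    # Power set of the union of A -- guaranteed not to be a member of A
    # by a Cantor-style argument (cf. pwuninel), so it can serve as the
    # "undefined value" ( Undef ` A ).
    u = frozenset().union(*A) if A else frozenset()
    return frozenset(frozenset(c)
                     for r in range(len(u) + 1)
                     for c in itertools.combinations(u, r))

def riota(A, ph):
    # Finite analogue of df-riota: the unique x e. A satisfying ph(x)
    # if there is exactly one, else ( Undef ` A ).
    witnesses = [x for x in A if ph(x)]
    return witnesses[0] if len(witnesses) == 1 else undef(A)

A = {frozenset({1}), frozenset({1, 2})}
assert riota(A, lambda x: len(x) == 2) in A      # E! holds: value is in A
assert riota(A, lambda x: True) not in A         # E! fails: value outside A
```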


(End of 15-Sep-2011 restricted iota proposal)
---------------------------------------------



(6-Sep-2011) cnaddablNEW vs. cnaddabl2NEW (continuation of 5-Sep-2011)
             ---------------------------------------------------------

Yesterday, I wrote:  "For some things like proving cnaddablxNEW (the new
version of cnaddabl), it seems we have to reference the actual
finite-sequence structure."  Today I added a function to construct an
explicit member of a structure class given its components, called
StrBldr (df-strbldr), and I proved cnaddablNEW with a
"scaffold-independent" notation, mainly to demonstrate a way to do it.

An advantage of using StrBldr is that we can later change the
definition of df-struct (e.g. to use a different ordered n-tuple
definition, such as a sequence on ordinals) without changing theorems.

A disadvantage of using StrBldr is longer expressions (at least for this
example).  It also hides the actual "thing" that the complex addition
group "is" in an abstract (and non-standard) way.

cnaddablxNEW is an example of the "scaffold-dependent" method:

cnaddablx.1NEW $e |- G = { <. 1 , CC >. , <. 2 , + >. } $.
cnaddablxNEW $p |- G e. AbelNEW $= ... $.

Note that the explicit structure { <. 1 , CC >. , <. 2 , + >. }, using
only elementary set-theory symbols, is shown.  In other words, the
"scaffold" is a 2-member finite sequence on NN, which we show
explicitly.  This would change if we adopted a different n-tuple
definition.

cnaddablNEW is an example of the "scaffold-independent" method:

cnaddabl.1NEW $e |- G = StrBldr ( 2 , g , ( ( base ` g ) = CC /\ ( +g ` g ) = + ) ) $.
cnaddablNEW $p |- G e. AbelNEW $= ... $.

In cnaddablNEW, there is no reference to the structure holding the group
components (base and operation).  If later we changed the definition of
Struct to use a different n-tuple definition, the theorem cnaddabl2NEW
would not change.

I don't know which might be better for long-term use.  In any case,
these theorems (in my mathbox) are experimental, and I don't plan to put
them into the main part of set.mm in the near future.  As usual, any
comments are welcome.


(5-Sep-2011) Structures for groups, rings, etc.
             ----------------------------------

In general, the way we extend structures in the present set.mm is
awkward.  For example, an inner product space is not a vector space in
the strict formal sense, but we must undergo a mapping to apply vector
space theorems to inner product spaces.

For small structures like groups, the "trick" of specifying the group
completely by its operation makes the statement of some theorems
shorter.  We extract the base set of the group from the range of this
operation.  But as structures get more complex (more specialized), we
have no uniform way to extend them.

With a view towards more flexibility in the future, I propose a new
format for structures using what I call "extensible structure builders".
The structures will be finite sequences, i.e. functions on ( 1 ...  N )
where N e. NN.  For example, a group consists of any finite sequence
with at least 2 members, a base and the group operation.  A ring is any
(Abelian) group specialized with a multiplication operation as the 3rd
member.  Thus any ring is also a group, and any theorem about groups
automatically applies to rings.  This kind of extensible specialization
becomes important with things like the transition from lattices to
ortholattices, vector spaces to inner product spaces, etc.  In other
words, "every Boolean algebra is a lattice" becomes a literal, rigorous
statement, not just an informal way of expressing a homomorphism.

I propose to use finite sequences instead of ordered n-tuples because
the theory of functions is well-developed, whereas ordered n-tuples
become awkward with more than 3 or 4 members, involving e.g.  ( 1st ` (
2nd ` ( 1st...))) to extract a member.  I made them sequences on NN
rather than om (omega) because we have a richer set of theorems for NN.
But in principle om could also be used; it would be closer to the
ZFC axioms, and the Axiom of Infinity would not be needed to prove,
say, simple theorems of group theory.

In my mathbox, I have shown a proposed definition of the extensible
structure builder Struct along with some theorems about groups and rings
"translated" to use it.  For the most part, the translations were
straightforward and mechanical, as can be seen by comparing theorems
with "NEW" after them to their originals already in set.mm.

In df-struct, we define Struct(N,f,ph), where N is an integer and ph is
a wff containing f as a free variable.  In Struct(N,f,ph), f is a
bound variable.

  df-struct $a |- Struct ( N , f , ph ) = { f | ( E. m e. NN ( N <_ m
                     /\ f Fn ( 1 ... m ) ) /\ ph ) } $.

In other words, a structure with say 2 members (such as a group) is any
sequence with _at least_ 2 members whose first 2 members satisfy the
requirements specified by wff ph.  This means we can extend (specialize)
a group to become a ring without destroying its property of being a
group.

When possible, it is desirable not to reference members of a structure
directly via the sequence values (which is highly dependent on the
definition of Struct(N,f,ph)).  Instead, we can define "extractors" of
the components.  This way, if in the future we decide on a different
"scaffold" (such as a sequence on finite ordinals instead of NN) most
theorems will remain unchanged.  For example, for groups and rings, we
define

  df-base  $a |- base = ( f e. _V |-> ( f ` 1 ) ) $.
  df-plusg $a |-   +g = ( f e. _V |-> ( f ` 2 ) ) $.
  df-mulr  $a |-   .r = ( f e. _V |-> ( f ` 3 ) ) $.

Then we can reference "(base ` G)" rather than "(G ` 1)".
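The intent of the extractors can be illustrated with a toy model
(hypothetical Python names only; base, plusg, and mulr here merely
mimic df-base, df-plusg, and df-mulr, and the carrier set is a
stand-in):

```python
# A structure is modeled as a finite sequence (a Python tuple); the
# extractors hide which slot holds which component.
def base(f):  return f[0]   # models ( base ` G ) = ( G ` 1 )
def plusg(f): return f[1]   # models ( +g ` G )   = ( G ` 2 )
def mulr(f):  return f[2]   # models ( .r ` G )   = ( G ` 3 )

carrier = frozenset({0, 1, 2})
grp = (carrier, "add")          # a "group": at least 2 slots
ring = (carrier, "add", "mul")  # a ring extends the group with a 3rd slot

# A theorem phrased via extractors never mentions slot numbers, so it
# applies verbatim to the longer sequence:
assert base(ring) == base(grp) and plusg(ring) == plusg(grp)
```

Changing the underlying "scaffold" then amounts to changing only the
three extractor definitions, not the theorems that use them.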

I put df-struct in my mathbox.  To illustrate it, I added the
corresponding new definitions df-grpNEW, df-ablNEW, and df-ringNEW
(which define classes GrpNEW, AbelNEW, and RingNEW) along with several
dozen theorems taken from the main set.mm, with NEW appended to their
names.  We have the nice inclusions

     RingNEW C_ AbelNEW C_ GrpNEW,

making it easy to reuse theorems for more general structures with less
general structures.  These inclusions can be continued through division
rings, vector spaces, and inner product spaces.

You may want to compare the "scaffold-independent" isgrpiNEW to the old
isgrpi.  There is also a "scaffold-dependent" version isgrpixNEW (the
"x" after "isgrpi" means "explicit" i.e. scaffold-dependent).  For some
things like proving cnaddablxNEW (the new version of cnaddabl), it seems
we have to reference the actual finite-sequence structure.  I have some
ideas for avoiding that, but so far they seem to make things more
complex rather than simpler, and I didn't put them in.  (Later - I did
put them in; see df-strbldr and related theorems and note of
6-Sep-2011.)

Also note the analogous zaddablNEW showing the integers are a group:
we do not have to restrict the "+" operation to ZZ.

Comments are welcome.



(31-Aug-2011) Canonical conjunctions - followup
              ---------------------------------

It looks like there is no strong consensus on any of the proposed
methods, so for the time being we'll continue to use whatever ad hoc
parenthesization seems to fit the need at hand.  Perhaps this is best
anyway.




(18-Aug-2011) Canonical conjunctions
              ----------------------

Background
----------

A significant portion of many proofs consists of manipulating
antecedents to rearrange the parenthesization of conjuncts, change the
order of conjuncts, etc.  One of the problems has to do with the many
ways that a conjunction can be parenthesized.  In the case of df-an
(binary conjuncts), the number of parenthesizations for 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, and 12 conjuncts are 1, 2, 5, 14, 42, 132, 429, 1430,
4862, 16796, and 58786 respectively (Catalan numbers).  For mixed df-an
and df-3an it is of course even larger.  It is impractical to have
utility theorems for all possible transformations from one
parenthesization to another; e.g. we would need 42 * 41 = 1722 utility
theorems, analogous to anasss, to handle all possible transformations of
6-conjunct antecedents with df-an parenthesizations.  So we compromise
and use a sequence of manipulations with a much smaller set of
transformations in order to achieve the desired transformation.
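The counts above can be reproduced with the closed form for the Catalan
numbers (a quick Python cross-check; binary_groupings is just a name
chosen here):

```python
from math import comb

def binary_groupings(n):
    # Number of ways to fully parenthesize n conjuncts using only the
    # binary /\ connective: the Catalan number C(n-1).
    k = n - 1
    return comb(2 * k, k) // (k + 1)

print([binary_groupings(n) for n in range(2, 13)])
# -> [1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786]
print(42 * 41)  # pairwise transformations among the 6-conjunct forms
# -> 1722
```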

To reduce the number of such antecedent manipulation steps as well as
the number of special-purpose utility theorems, I propose that for all
new theorems, we adopt a canonical parenthesization for all conjunctions
(whether in antecedents or not).  Of course there would be certain early
exceptions such as anass that are intrinsically about the
parenthesization.  Over time, existing theorems and their proofs can
also be retrofitted to shorten them.  I believe this can significantly
reduce the size of some proofs as well as make them easier to follow,
since the reader doesn't have to mentally skip over increasingly nested
steps doing nothing but boring antecedent transformations.


Summary of proposals
--------------------

I will describe four possible methods.  (A fifth method, from a
Metamath Google Groups posting by FL on 9-Aug-2011, is a hybrid of
Method #1 and Method #2 and is not presented here.)

Method #1:  Same nesting level
Advantages:  Arguably most "natural"; fewest changes to retrofit set.mm
Disadvantages:  Neither minimum parentheses nor minimum nesting

Method #2:  Recursive grouping
Advantages:  Minimal parentheses; simple algorithm
Disadvantages:  No minimum nesting

Method #3:  Symmetry
Advantages:  Minimal parentheses; minimal nesting under constraint of
    symmetry
Disadvantages:  Apparently not "natural" (rarely used in current
    set.mm); no simple algorithm to derive the (n+1)st case from the
    nth case; nesting is not the theoretical minimum.

Method #4:  Attempt at minimal nesting
Advantages:  Appears to have minimum parentheses and minimum nesting.
Disadvantages:  Not intuitive; somewhat complex algorithm.


Method #1:  Same nesting level
------------------------------

I collected all antecedents using 2an and/or 3an, with the results
summarized in Table 1 below.  The empirically most frequent patterns are
arguably the "most natural".  For 2 through 6 conjuncts, these are:

  2210 (ph/\ps)
  1005 (ph/\ps/\ch)
   239 ((ph/\ps)/\(ch/\th))
    58 ((ph/\ps/\ch)/\(th/\ta))
    24 ((ph/\ps/\ch)/\(th/\ta/\et))

(For 7 through 12 conjuncts there isn't enough data for a clear pattern
to emerge.)  The pattern here is that all conjuncts are at the same
nesting level, at the expense of both minimum nesting and minimum
parentheses.  The rules would be:

1. Using 3an as much as possible, put all conjuncts at the same nesting
level.

2. At a given level, 2an's go to the right of 3an's.

For 2 through 12 conjuncts, these rules give:

(ph/\ps)
(ph/\ps/\ch)
((ph/\ps)/\(ch/\th))
((ph/\ps/\ch)/\(th/\ta))
((ph/\ps/\ch)/\(th/\ta/\et))
((ph/\ps/\ch)/\(th/\ta)/\(et/\ze))
((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si))
((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))
(((ph/\ps/\ch)/\(th/\ta/\et))/\((ze/\si)/\(rh/\la)))
(((ph/\ps/\ch)/\(th/\ta/\et))/\((ze/\si/\rh)/\(la/\ka)))
(((ph/\ps/\ch)/\(th/\ta/\et))/\((ze/\si/\rh)/\(la/\ka/\mu)))

Table 1:  Frequency of parenthesization patterns in set.mm

#cases parenthesization
------ ----------------
  2210 (ph/\ps)
  1005 (ph/\ps/\ch)
    62 (ph/\(ps/\ch))
    74 ((ph/\ps)/\ch)
     8 ((ph/\ps)/\ch/\th)
    13 (ph/\(ps/\ch)/\th)
    50 (ph/\ps/\(ch/\th))
   101 ((ph/\ps/\ch)/\th)
   128 (ph/\(ps/\ch/\th))
     6 (((ph/\ps)/\ch)/\th)
     6 (ph/\((ps/\ch)/\th))
     7 (ph/\(ps/\(ch/\th)))
    10 ((ph/\(ps/\ch))/\th)
   239 ((ph/\ps)/\(ch/\th))
     1 (ph/\ps/\(ch/\th)/\ta)
     4 (ph/\ps/\(ch/\th/\ta))
     5 ((ph/\ps/\ch)/\th/\ta)
     1 (((ph/\ps)/\ch/\th)/\ta)
     1 ((ph/\ps)/\ch/\(th/\ta))
     2 ((ph/\ps/\(ch/\th))/\ta)
     2 (ph/\((ps/\ch/\th)/\ta))
     4 ((ph/\(ps/\ch/\th))/\ta)
     6 ((ph/\ps)/\(ch/\th)/\ta)
    19 (ph/\(ps/\ch)/\(th/\ta))
    21 ((ph/\ps)/\(ch/\th/\ta))
    58 ((ph/\ps/\ch)/\(th/\ta))
     1 (((ph/\ps)/\ch)/\(th/\ta))
     1 ((ph/\(ps/\(ch/\th)))/\ta)
     1 ((ph/\ps)/\((ch/\th)/\ta))
     1 (ph/\(((ps/\ch)/\th)/\ta))
     1 (ph/\((ps/\(ch/\th))/\ta))
     2 ((ph/\(ps/\ch))/\(th/\ta))
     4 (ph/\((ps/\ch)/\(th/\ta)))
     5 (((ph/\ps)/\(ch/\th))/\ta)
     1 (ph/\(ps/\ch/\th)/\(ta/\et))
     2 (((ph/\ps/\ch)/\th/\ta)/\et)
    24 ((ph/\ps/\ch)/\(th/\ta/\et))
     1 (((ph/\ps/\ch)/\th)/\(ta/\et))
    13 ((ph/\ps)/\(ch/\th)/\(ta/\et))
     1 (((ph/\ps/\ch)/\th/\ta)/\et/\ze)
     1 ((((ph/\ps)/\ch)/\(th/\ta))/\et)
     1 (((ph/\ps)/\ch)/\(th/\(ta/\et)))
     1 ((ph/\(ps/\ch))/\((th/\ta)/\et))
     1 (ph/\(ps/\((ch/\th)/\(ta/\et))))
     2 ((ph/\(ps/\ch))/\(th/\(ta/\et)))
     2 ((ph/\ps)/\((ch/\(th/\ta))/\et))
     3 (((ph/\ps)/\(ch/\th))/\(ta/\et))
     3 (((ph/\ps)/\ch)/\((th/\ta)/\et))
     7 ((ph/\ps)/\((ch/\th)/\(ta/\et)))
     1 (ph/\((ps/\ch/\th)/\(ta/\et)/\ze))
     3 (((ph/\ps)/\(ch/\th)/\(ta/\et))/\ze)
     4 ((ph/\ps)/\(ch/\th/\ta)/\(et/\ze/\si))
     2 (((ph/\ps)/\(ch/\th))/\((ta/\et)/\ze))
     2 ((ph/\(ps/\ch))/\((th/\ta)/\(et/\ze)))
     2 (ph/\((ps/\ch)/\((th/\ta)/\(et/\ze))))
     1 (ph/\((ps/\ch/\th)/\(ta/\et)/\(ze/\si)))
     1 (ph/\(ps/\ch/\th)/\((ta/\et)/\(ze/\si)))
     1 ((ph/\ps/\ch)/\th/\(ta/\et/\(ze/\si/\rh)))
     1 (((ph/\ps)/\(ch/\th)/\(ta/\et))/\(ze/\si))
     2 ((((ph/\ps)/\(ch/\th))/\(ta/\et))/\(ze/\si))
     4 ((ph/\ps)/\((ch/\((th/\ta)/\(et/\ze)))/\si))
     8 (((ph/\ps)/\(ch/\th))/\((ta/\et)/\(ze/\si)))
     1 ((((ph/\ps)/\(ch/\th))/\((ta/\et)/\(ze/\si)))/\((rh/\la)/\ka))
     1 ((((ph/\ps)/\ch)/\((th/\ta)/\et))/\(((ze/\si)/\rh)/\((la/\ka)/\mu)))


Method #2:  Recursive grouping
------------------------------

This parenthesization was first proposed by FL.  It is described by an
algorithm given by Andrew Salmon.

1. From left to right, group by three as many subexpressions as
possible.  Repeat until no more grouping occurs.

2. If there are two subexpressions, group them.

3. Done.

Example:
1 2 3 4 5 6 7 8 9 A B
(1 2 3) (4 5 6) (7 8 9) A B          Rule 1
( (1 2 3) (4 5 6) (7 8 9) ) A B      Rule 1
( ( (1 2 3) (4 5 6) (7 8 9) ) A B)   Rule 1

Example:
1 2 3 4
(1 2 3) 4      Rule 1
((1 2 3) 4)    Rule 2


For 2 through 12 conjuncts, we would have:

(ph/\ps)
(ph/\ps/\ch)
((ph/\ps/\ch)/\th)
((ph/\ps/\ch)/\th/\ta)
((ph/\ps/\ch)/\(th/\ta/\et))
((ph/\ps/\ch)/\(th/\ta/\et)/\ze)
((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si))
((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))
(((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))/\la)
(((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))/\la/\ka)
(((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))/\(la/\ka/\mu))
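The grouping rules above can be sketched directly in Python (an
illustration only, not part of any Metamath tooling; conjunctions are
modeled as nested tuples):

```python
def group3(items):
    """Method #2 grouping: from left to right, group subexpressions
    by three; repeat until no more grouping occurs; finally pair up
    a leftover two (rule 2)."""
    items = list(items)
    while len(items) > 3:
        grouped, i = [], 0
        while i + 3 <= len(items):
            grouped.append(tuple(items[i:i + 3]))
            i += 3
        grouped.extend(items[i:])  # at most two ungrouped stragglers
        items = grouped
    return tuple(items) if len(items) > 1 else items[0]

# The 11-conjunct example from the text:
print(group3([1, 2, 3, 4, 5, 6, 7, 8, 9, 'A', 'B']))
# -> (((1, 2, 3), (4, 5, 6), (7, 8, 9)), 'A', 'B')
```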



Method #3:  Symmetry
--------------------

This method has 3 rules:

  1. Minimum nesting.

  2. Minimum parentheses.

  3. Symmetrical parenthesization.

For n = 2 through 12 conjunctions, it appears that the following
parenthesizations are the only ones that satisfy these rules:

                                                      nesting sum
                         (ph/\ps)                            2
                       (ph/\ps/\ch)                          3
                    (ph/\(ps/\ch)/\th)                       6
                  (ph/\(ps/\ch/\th)/\ta)                     8
               ((ph/\ps/\ch)/\(th/\ta/\et))                 12
             ((ph/\ps/\ch)/\th/\(ta/\et/\ze))               13
          ((ph/\ps/\ch)/\(th/\ta)/\(et/\ze/\si))            16
        ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))          18
     ((ph/\ps/\ch)/\(th/\(ta/\et)/\ze)/\(si/\rh/\la))       22
   ((ph/\ps/\ch)/\(th/\(ta/\et/\ze)/\si)/\(rh/\la/\ka))     25
((ph/\ps/\ch)/\((th/\ta/\et)/\(ze/\si/\rh))/\(la/\ka/\mu))  30


An algorithm for method #3:
---------------------------

Here is an algorithm.  I don't know if there is a better one.

The groupings for 0 through 5 conjuncts must be memorized, although they
have justifications that aren't too hard.  For >= 6 conjuncts, there is
a recursive algorithm starting from 0 through 5.

(The 0 and 1 cases aren't really conjuncts but help define the
algorithm.  Alternatively, we could start with 2 conjuncts and begin the
recursion with n >= 8, but then cases 6 and 7 need to be memorized.)

0 through 5 conjuncts:                      Justification:

          n = 0             [null]          only 1 symmetric possibility
          n = 1               ph            only 1 symmetric possibility
          n = 2            (ph/\ps)         only 1 symmetric possibility
          n = 3          (ph/\ps/\ch)       only 1 symmetric possibility

          n = 4       (ph/\(ps/\ch)/\th)    the other symmetric possibility
                                            ((ph/\ps)/\(ch/\th)) is longer

          n = 5     (ph/\(ps/\ch/\th)/\ta)  the other symmetric possibility
                                            ((ph/\ps)/\ch/\(th/\ta)) is longer

For n >= 6 conjuncts, conjoin (./\./\.) and (./\./\.) around the
grouping for n-6 conjuncts.

Another way to look at it:  start with the case (n mod 6) from
above, then successively wrap in (./\./\.)...(./\./\.) until n conjuncts
are achieved.

                   ((ph/\ps/\ch)  /\  (th/\ta/\et))
                 ((ph/\ps/\ch)/\  th  /\(ta/\et/\ze))
              ((ph/\ps/\ch)/\  (th/\ta)  /\(et/\ze/\si))
            ((ph/\ps/\ch)/\  (th/\ta/\et)  /\(ze/\si/\rh))
         ((ph/\ps/\ch)/\  (th/\(ta/\et)/\ze)  /\(si/\rh/\la))
       ((ph/\ps/\ch)/\  (th/\(ta/\et/\ze)/\si)  /\(rh/\la/\ka))
  ((ph/\ps/\ch)/\  ((th/\ta/\et)  /\  (ze/\si/\rh))  /\(la/\ka/\mu))
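The recursion is easy to check mechanically.  In this Python sketch
(illustrative only; 'v' stands for a generic conjunct), sym builds the
Method #3 grouping and nesting_sum recomputes the nesting sums from
the table above:

```python
def sym(n):
    # Method #3: memorized base cases for 0 through 5 conjuncts, then
    # wrap the (n-6)-conjunct grouping in a triple on each side.
    base = {0: None, 1: 'v', 2: ('v', 'v'), 3: ('v', 'v', 'v'),
            4: ('v', ('v', 'v'), 'v'), 5: ('v', ('v', 'v', 'v'), 'v')}
    if n <= 5:
        return base[n]
    t = ('v', 'v', 'v')
    inner = sym(n - 6)
    return (t, t) if inner is None else (t, inner, t)

def nesting_sum(g, depth=0):
    # Sum of the nesting depths of all conjuncts in a grouping.
    return depth if g == 'v' else sum(nesting_sum(c, depth + 1) for c in g)

print([nesting_sum(sym(n)) for n in range(2, 13)])
# -> [2, 3, 6, 8, 12, 13, 16, 18, 22, 25, 30]
```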


Notes for method #3:
--------------------

1. I conjecture that the algorithm above always results in minimum
   nesting given the symmetry requirement, but I don't have a proof.

2. Unfortunately, in some cases a grouping with minimal nesting does
   not have symmetry.  For example,

        ((ph/\ps/\ch)/\(th/\ta/\et))

   has nesting sum 12, whereas the non-symmetrical

        ((ph/\ps)/\ch/\(th/\ta/\et))

   has nesting sum 11.

3. Minimum nesting + symmetry by themselves don't imply minimum
   parentheses.  For example, the following groupings for 6 conjuncts
   each have minimum nesting sum of 12, but only the first has
   minimum parentheses:

                 ((ph/\ps/\ch)/\(th/\ta/\et))
                 ((ph/\ps)/\(ch/\th)/\(ta/\et))

4. Minimum parentheses + symmetry by themselves do not necessarily imply
   minimum nesting.  For example, for 11 conjuncts, the nesting sum
   from the above algorithm is 25:

      ((ph/\ps/\ch)/\(th/\(ta/\et/\ze)/\si)/\(rh/\la/\ka))  25

   But there exist 3 other symmetrical groupings with same parentheses
   but more nesting:

      (((ph/\ps/\ch)/\th/\ta)/\et/\(ze/\si/\(rh/\la/\ka)))  27
      ((ph/\(ps/\ch/\th)/\ta)/\et/\(ze/\(si/\rh/\la)/\ka))  27
      ((ph/\ps/\(ch/\th/\ta))/\et/\((ze/\si/\rh)/\la/\ka))  27

5. I don't know if (say for some larger n) there exist other symmetric
   patterns with both minimum nesting and minimum parentheses.  If so, then
   the algorithm would become the definition of the grouping, not just the 3
   rules.

(End of method #3 discussion)


Method #4
---------
                                                         nesting sum
(ph/\ps)                                                     2
(ph/\ps/\ch)                                                 3
((ph/\ps)/\ch/\th)                                           6
((ph/\ps/\ch)/\th/\ta)                                       8
((ph/\ps/\ch)/\(th/\ta)/\et)                                11
((ph/\ps/\ch)/\(th/\ta/\et)/\ze)                            13
((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si))                      16
((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))                  18
(((ph/\ps)/\ch/\th)/\(ta/\et/\ze)/\(si/\rh/\la))            22
(((ph/\ps/\ch)/\th/\ta)/\(et/\ze/\si)/\(rh/\la/\ka))        25
(((ph/\ps/\ch)/\(th/\ta)/\et)/\(ze/\si/\rh)/\(la/\ka/\mu))  29

Algorithm for method #4
------------------------

From grouping n to grouping n+1:

1. Let m = max nesting depth of grouping n.

2. If there is a wff at depth m-1 that is not a triple conjunction, let m = m-1

3. Find first wff (call it w) at depth m that is not a triple conjunction.

4. If w is a wff variable, change it to a conjunction of two wff variables.
   If w is a conjunction of two wff variables, change it to a conjunction of
   three wff variables.

Examples of algorithm for method #4
-----------------------------------

Example: start with  ((ph/\ps/\ch)/\th/\ta).
1. m = max nesting depth = 2.
2. Non-triple-conjunct th is at depth m-1 = 1, so m = m-1 = 1.
3. w = th
4. Change w to (th/\ta).  Result is ((ph/\ps/\ch)/\(th/\ta)/\et).

Example: start with  ((ph/\ps/\ch)/\(th/\ta)/\et).
1. m = max nesting depth = 2.
2. Non-triple-conjunct (th/\ta) is at depth m-1 = 1, so m = m-1 = 1.
3. w = (th/\ta)
4. Change w to (th/\ta/\et).  Result is ((ph/\ps/\ch)/\(th/\ta/\et)/\ze).

Example: start with  ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh)).
1. m = max nesting depth = 2.
2. There is no non-triple conjunct at depth m-1 = 1, so m stays at 2.
3. w = ph
4. Change w to (ph/\ps).  Result is
   (((ph/\ps)/\ch/\th)/\(ta/\et/\ze)/\(si/\rh/\la)).

Example: start with  (ph/\ps).
1. m = max nesting depth = 1.
2. Non-triple-conjunct (ph/\ps) is at depth m-1 = 0, so m = m-1 = 0.
3. w = (ph/\ps)
4. Change w to (ph/\ps/\ch).  Result is (ph/\ps/\ch).

Example: start with  (ph/\ps/\ch).
1. m = max nesting depth = 1.
2. There is no non-triple conjunct at depth m-1 = 0, so m stays at 1.
3. w = ph
4. Change w to (ph/\ps).  Result is ((ph/\ps)/\ch/\th).
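Steps 1 through 4 can also be run mechanically.  In the Python sketch
below (illustrative only; 'v' stands for a wff variable, and renaming
of variables is ignored), next_grouping performs one step, and
iterating from (ph/\ps) generates the successive Method #4 groupings
together with their nesting sums:

```python
def is_triple(w):
    return isinstance(w, tuple) and len(w) == 3

def max_depth(g):
    return 1 + max(map(max_depth, g)) if isinstance(g, tuple) else 0

def wffs_at(g, d):
    # All subexpressions of g at nesting depth d.
    if d == 0:
        yield g
    elif isinstance(g, tuple):
        for c in g:
            yield from wffs_at(c, d - 1)

def grow(g, d):
    # Replace the first non-triple wff at depth d (steps 3-4): a
    # variable becomes a pair, a pair becomes a triple.
    if d == 0:
        if not is_triple(g):
            return (('v', 'v'), True) if g == 'v' else (g + ('v',), True)
        return g, False
    if isinstance(g, tuple):
        out, done = [], False
        for c in g:
            if not done:
                c, done = grow(c, d - 1)
            out.append(c)
        return tuple(out), done
    return g, False

def next_grouping(g):
    m = max_depth(g)                                    # step 1
    if any(not is_triple(w) for w in wffs_at(g, m - 1)):
        m -= 1                                          # step 2
    return grow(g, m)[0]                                # steps 3-4

def nesting_sum(g, depth=0):
    return depth if g == 'v' else sum(nesting_sum(c, depth + 1) for c in g)

g, sums = ('v', 'v'), [2]
for _ in range(10):
    g = next_grouping(g)
    sums.append(nesting_sum(g))
# sums == [2, 3, 6, 8, 11, 13, 16, 18, 22, 25, 29]
```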

(End of method #4 discussion)


Examples
--------

(The examples use method #3 above and are analogous for the others)

As an example, to swap the 1st and 3rd conjuncts in a 4-conjunct antecedent,
we could have, in place of several ancomxxs forms, just:

    4com13.1 $e ( ( ph /\ ( ps /\ ch ) /\ th ) -> ta ) $.
    4com13   $p ( ( ch /\ ( ps /\ ph ) /\ th ) -> ta ) $=

To add an antecedent in the 3rd position to a 4-conjunct antecedent,
in place of the numerous adantlrl, adantlrr, etc. we would have just:

    4adant3.1 $e ( ( ph /\ ps /\ ch ) -> th ) $.
    4adant3   $p ( ( ph /\ ( ps /\ ta ) /\ ch ) -> th ) $=

There would be only one "import" statement for each conjunct count,
rather than e.g. imp41 through imp45 (plus others for the 3an cases) for
4 conjuncts:

    imp4.1 $e |- ( ph -> ( ps -> ( ch -> ( th -> ta ) ) ) ) $.
    imp4   $p |- ( ( ph /\ ( ps /\ ch ) /\ th ) -> ta ) $=

Some things may become problematic.  For example, suppose we have
( ( ta /\ et ) -> ph ) that we would normally inject into an antecedent
with a sylan-like theorem.  We would need multiple cases for various
conjunct counts in both hypotheses, to ensure that the antecedent of the
result stays in canonical form.  I don't know how big a problem this
will be.

    syl32an1.1 $e |- ( ( ph /\ ps /\ ch ) -> th ) $.
    syl32an1.2 $e |- ( ( ta /\ et ) -> ph ) $.
    syl32an1   $p |- ( ( ta /\ ( et /\ ps ) /\ ch ) -> th ) $=


Additional comments
-------------------

It is interesting that (with methods 2 and 3 above) df-3an usually
halves the number of parentheses in the canonical forms:

                 conjuncts:  2 3 4 5  6  7  8  9 10 11 12
                 ----------  ----------------------------
      parens w/ df-an only:  2 4 6 8 10 12 14 16 18 20 22
  parens w/ df-an + df-3an:  2 2 4 4  6  6  8  8 10 10 12

This is an argument in favor of keeping df-3an rather than having the
simpler syntax with df-an only.

If a theorem and its proof use only conjunctions in canonical form, it
might be relatively straightforward to retrofit a possible future
df-4an (or even revert to just df-an) just by changing the canonical
forms and the utility theorems handling them.

Jonathan Ben-Naim has df-4an (called df-bnj17) in his mathbox.  I would
want to ponder whether its benefits outweigh its drawbacks before
moving it into the main set.mm.  My initial objection was the large
number of additional utility theorems that would be needed for
transformations back and forth to df-an and df-3an.  That might become
less important if we start using canonical parenthesizations.  If we add
df-4an, I think the table above would be:

                          conjuncts:  2 3 4 5  6  7  8  9 10 11 12
                          ----------  ----------------------------
               parens w/ df-an only:  2 4 6 8 10 12 14 16 18 20 22
           parens w/ df-an + df-3an:  2 2 4 4  6  6  8  8 10 10 12
  parens w/ df-an + df-3an + df-4an:  2 2 2 4  4  4  6  6  6  8  8

so the parenthesis savings, while nice, have less impact than with
adding just df-3an.  We could also go to the extreme and add
df-5an, ..., df-12an so we always have just 2 parentheses, but (among
other things) the run time of the current "improve all" algorithm in
MM-PA would grow exponentially; already the default search limit times
out when there are too many nested df-3ans.  It is possible to improve
that algorithm but it would take some work.
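Both tables follow from a simple count: each conjunction node costs one
pair of parentheses and joins at most (arity - 1) additional conjuncts,
so a minimal grouping of n conjuncts with top arity k needs
ceil((n-1)/(k-1)) nodes.  A Python check (paren_count is a name
invented here):

```python
from math import ceil

def paren_count(n, max_arity):
    # Total parenthesis characters in a minimal grouping of n conjuncts
    # when connectives of up to the given arity are available: each
    # node contributes 2 characters and absorbs (max_arity - 1) joins.
    return 2 * ceil((n - 1) / (max_arity - 1))

ns = range(2, 13)
print([paren_count(n, 2) for n in ns])  # df-an only
print([paren_count(n, 3) for n in ns])  # df-an + df-3an
print([paren_count(n, 4) for n in ns])  # df-an + df-3an + df-4an
```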

(End of 18-Aug-2011 canonical conjunctions proposal)
-------------------------------------------------------------------------


(22-Jun-2009) Gerard Lang's proof that ax-groth implies ax-pow was
apparently unrecognized.*  The page
http://en.wikipedia.org/wiki/Tarski-Grothendieck_set_theory
(which calls ax-groth "Tarski's axiom") mentions only that
"Tarski's axiom also implies the axioms of Infinity and Choice."
Perhaps someone should update that page. :)
(In the interest of objectivity I do not personally edit
Wikipedia references to Metamath.)

* Later (24-Jun-2009):  Gerard pointed out that this was mentioned by
Bob Solovay here:

 http://www.cs.nyu.edu/pipermail/fom/2008-March/012783.html

I updated the Wikipedia entry to cite that.


(11-Dec-2008) I resynchronized Jeff Hoffman's nicod.mm to the recent
label changes and made it an official part of set.mm.  In connection
with this, the NAND connective (Sheffer stroke) was added to set.mm.

Jeff's original nicod.mm can be found at

  http://groups.google.com/group/metamath


(18-Nov-2008) Some stragglers I missed in yesterday's change were
updated, so if you downloaded yesterday's set.mm, you should refresh it.

   Old       New

   cdavalt   cdaval
   cdaval    cdavali
   unbnnt    unbnn3
   frsuct    frsuc
   fr0t      fr0g
   rdg0t     rdg0g
   ssonunit  ssonuni
   ssonuni   ssonunii
   eqtr3t    eqtr3
   eqtr2t    eqtr2
   eqtrt     eqtr
   3eqtr4r   3eqtr4ri
   3eqtr3r   3eqtr3ri
   3eqtr2r   3eqtr2ri
   3bitr3r   3bitr3ri
   3bitr4r   3bitr4ri
   3bitr2r   3bitr2ri
   biimpr    biimpri
   biimp     biimpi
   impbi     impbii

(17-Nov-2008) Many labels in set.mm were changed to conform to the
following convention:  an inference version of a theorem is now always
suffixed with "i", whereas the closed theorem version has no suffix.
For example, "subcli" and "subcl" are the new names for the old "subcl"
and "subclt" respectively.

Also, the inference versions of the various transitive laws now have an
"i" suffix, such as "eqtr3i" (for the old "eqtr3"), "bitr4i" (for the
old "bitr4"), etc.  This will make them consistent with the "i"/"d"
convention for inferences and deductions ("eqtr3i"/"eqtr3d", etc.)

There were 1548 labels involved in this change.  They are documented at
the top of the set.mm file.

For people used to the old labels, it will take some practice to get
used to this change, but I think it will be better in the long term
since the database now conforms to a single standard.

If anyone is using set.mm for something not in their sandboxes, you can
contact me for a script that updates the labels with these changes.


(14-Nov-2008) The following frequently-used theorems were renamed for
better consistency and to avoid confusion with negative numbers (thanks
to Stefan Allan for the suggestion):

    Old       New

    nega      notnot2
    negai     notnotri
    negb      notnot1
    negbi     notnoti
    negbii    notbii
    negbid    notbid
    pm4.13    notnot
    pm4.11    notbi


(8-Sep-2008) Although at first glance expimpd and expdimp seem rather
specialized, they amazingly shorten over 80 proofs, so with them the net
size of set.mm is reduced.


(1-Sep-2008) I am phasing in "A e. Fin" in place of the current
"E. x e. om { A } ~~ x" to express "A is finite".  The latter idiom is
now used frequently enough so that the net size of set.mm will hopefully
be reduced as a result.


(31-Aug-2008) I did a number of revisions to the Unicode font characters
so that all symbols now display in the Opera browser as well as Firefox.


(22-May-2008) Yesterday's derivation of axiom ax-4 from the others
required new versions of axioms ax-5 and ax-6.  The old versions were
renamed ax-5o and ax-6o.  Theorems ax5o and ax6o derive axioms
ax-5o/ax-6o from the new ax-5/ax-6; theorems ax5 and ax6 show the
reverse derivations.

The organization of the axioms in set.mm has been changed.  The new
complete set of non-redundant axioms is now introduced in a single place
in set.mm in a new section called "Predicate calculus axiomatization".
(Before, they were scattered throughout, introduced as they were
needed.)  We immediately derive ax-4 and the old ax-5 and old ax-6 (now
called ax-5o and ax-6o) as theorems ax4, ax5o, and ax6o.

The next section in set.mm, "Predicate calculus without distinct
variables", has the original gentle derivations from ax-4, ax-5o, and
ax-6o, and eventually the equality theorems not needing ax-17.  The idea
here is that as long as an inexperienced reader accepts ax-4 a priori,
there is no need to go through the advanced, $d-using proof of ax4.
This also provides us with more meaningful "proved from axioms" lists
for the section without distinct variables, without mention of the ax-17
etc. used to prove ax4.

We finally introduce the "normal" use of ax-17 in the section "Predicate
calculus with distinct variables" with essentially the same organization
as before.

The reason for proving ax4 at the beginning, and not after say the old
place where ax-17 used to be, is to conform to the following convention,
mentioned in the comment of ax4:

  Note:  All predicate calculus axioms introduced from this point forward
  are redundant.  Immediately before their introduction, we prove them
  from earlier axioms to demonstrate their redundancy.  Specifically,
  redundant axioms ~ ax-4 , ~ ax-5o , ~ ax-6o , ~ ax-9o , ~ ax-10o ,
  ~ ax-11o , ~ ax-15 , and ~ ax-16 are proved by theorems ~ ax4 , ~ ax5o ,
  ~ ax6o , ~ ax9o , ~ ax10o , ~ ax11o , ~ ax15 , and ~ ax16 .

so that the proof of theorem ax4 can't have an accidental circular
reference to axiom ax-4 (which would be possible if we put the ax4 proof
later in the development).

The Metamath Proof Explorer Home Page has been updated with the new set
of non-redundant (as far as we know) predicate calculus axioms that
eliminates axiom ax-4.

With ax-4 omitted from the official list of non-redundant axioms, we no
longer have the former "pure" predicate calculus subsystem, that used to
be ax-4/ax-5o/ax-6o, as part of the non-redundant list.  Therefore it no
longer makes sense to subdivide the axioms into separate groups on the
MPE Home Page, and I combined them into one big table.  I moved the
description of the "pure" predicate calculus subsystem to the last entry
of the subsystem table
http://us2.metamath.org:8888/mpeuni/mmset.html#subsys

-----

On another matter, the user sandboxes have been moved to the end of
set.mm as suggested by O'Cat.  Unfortunately, this means the software
thinks they are in the "Hilbert Space Explorer" section during the web
page generation.  This will be a minor cosmetic inconvenience until I
address this.


(21-May-2008) With slightly modified ax-5 and ax-6, ax-4 becomes
redundant as shown by theorem ax4.  The ax-5 and ax-6 modifications have
the same total length as the old ones, renamed to ax-5o and ax-6o.


(17-May-2008) Axiom ax-10 was shortened.  The previous version was
renamed ax-10o.  Theorem ax10o shows that the previous version can be
derived from the new ax-10.  The Metamath Proof Explorer Home Page has
been updated to use the shortened axiom.


(14-May-2008) I am hoping that the supremum df-spw for weak orderings
will end up being easier to use in general than df-sup, because it
doesn't need a separate hypothesis to show that the supremum existence
condition is met.  Instead, the supremum exists iff the supw value
belongs to the relation's field.  If this turns out to be useful, I may
rethink the definition of df-sup as well.


(12-May-2008) The following frequently-used labels have been changed
to be slightly less cryptic and more consistent:

          old       new

12-May-08 a4w1      a4eiv
12-May-08 a4w       a4imev
12-May-08 a4c1      a4imed
12-May-08 a4c       a4ime
12-May-08 a4b1      a4v
12-May-08 a4b       a4imv
12-May-08 a4at      a4imt
12-May-08 a4a       a4im

For the new labels, "a4" means related to ax-4, "im" means the implicit
substitution hypothesis needs to be satisfied in only one direction, "i"
means inference, "e" means existential quantifier version, "v" means
distinct variables eliminate a bound-variable hypothesis, "d" means
deduction, and "t" means closed theorem.


(6-May-2008) The definitions of +oo and -oo (df-pnf and df-mnf) were
changed so that the Axiom of Regularity is not required for their
justification.  Instead, we use Cantor's theorem, as shown in
pnfnre, mnfnre, and pnfnemnf.

A standard version of the Axiom of Infinity, ax-inf2, has been added to
set.mm.  It is derived from our version as theorem axinf2, using
ax-inf and ax-reg.  I broke out ax-inf2 as a separate axiom so that
we can more easily identify "normal" uses of Regularity.  Before, this
was hard to do because any reference to omex would automatically
include Regularity as one of the axioms used.


(21-Apr-2008) Paul Chapman has replaced the real log with the more
general complex log.  The earlier real log theorems by Steve Rodriguez
have been revised to use the new definition.  Steve's original theorems
can temporarily be found under the same name suffixed with "OLD", using
the token "logOLD" rather than "log".


(10-Mar-2008) The complex number axioms use a different naming
convention than their corresponding theorems, e.g. we have axaddrcl
rather than readdclt, sometimes causing confusion for people entering
proofs.  Therefore, I added aliases for their names using 1-step proofs,
as follows:

     Axiom       Alias

     axaddcl     addclt
     axaddrcl    readdclt
     axmulcl     mulclt
     axmulrcl    remulclt
     axaddcom    addcomt
     axmulcom    mulcomt
     axaddass    addasst
     axmulass    mulasst
     axdistr     adddit
     ax0id       addid1t
     ax1id       mulid1t
     axlttrn     lttrt
     axmulgt0    mulgt0t


(6-Mar-2008) pm3.26, pm3.27, and pm3.28 were erroneously given with
logical OR expanded into negation and implication.  pm3.26OLD, pm3.27OLD,
and pm3.28OLD, which will eventually be deleted, are the erroneous
versions of these.  This error also found its way into pmproofs.txt
http://us2.metamath.org:8888/mmsolitaire/pmproofs.txt which has also
been corrected.  Going through my backups, I found that this error dates
back to pre-Metamath in the early 90's when I converted my manually
typed list of 193 Principia Mathematica theorems into the condensed
detachment notation of pmproofs.txt.  Fortunately, this has no effect on
the pmproofs.txt proof itself.  I checked against the original typed
list, and only these 3 theorems had the mistake.


(11-Feb-2008) Theorems whose description begins with "Lemma for" have
their math symbols suppressed in the Statement (Theorem) List in order
to reduce the bulk of the list for faster web page loading.  Sometimes,
though, it is useful to have the lemma displayed.  As an informal
standard, I will change "Lemma for" to "- Lemma for" when we want the
lemma displayed.  The first one is fsumcllem, requested by Paul Chapman,
since it will be used for multiple purposes and may make sense to
someday call a "theorem".  If there are lemmas you would like to have
displayed, let me know.

(3-Feb-2008) topbas provides a simpler definition of a basis when we
know its topology in advance.  It is interesting that the very complex
expansion of "( B e. Bases /\ ( topGen ` B ) = J )" simplifies to
"A. x e. J E. y ( y (_ B /\ x = U. y )" when J is known.  Proving it was
trickier than I thought it would be, although the final proof is
relatively short.

I updated the Description of istopg to explain why the variable name "J"
is used for topologies.


(16-Jan-2008) ax-12 is the longest predicate calculus axiom, and an open
problem is whether it can be shortened or even proved from the others.
After 15 years of on-and-off work on this problem with no success,
today's a12study finally gives us a first hint, showing that it is
possible to represent ax-12 with two shorter formulas.  While the
shortening of the starting formulas is modest, and of course their
combined length is much longer than ax-12, the result is still
significant:  before, it wasn't clear whether ax-12 had some intrinsic
property preventing it from being "broken up" into smaller pieces.

It is curious that the hypotheses of a12study have similar forms.  I
don't know how they might be related.  Note that by detaching ax-9 from
the second one, they can also be written:

    a12study.1 $e |- ( -. A. z z = y ->
                             ( A. z ( z = x -> z = y ) -> x = y ) ) $.
    a12study.2 $e |- ( -. A. z -. z = y ->
                             ( A. z ( z = x -> -. z = y ) -> -. x = y ) ) $.


(12-Jan-2008) cnnvg is designed to match hypotheses of the form
"$e |- G = ( +v ` U )" such as in nvass.  When nvass is applied to the
vector space of complex numbers, cnnvba and cnnvg will change X to CC
and G to + with no other manipulations, immediately producing the
standard associative law for addition of complex numbers (after
"U e. CVec" is detached with cnnv).  This method will allow us to make
efficient use of complex number theorems, such as when working with
linear functionals that map to complex numbers.

cnnvdemo shows how this is done.  While U is substituted with
"<. <. + , x. >. , abs >." in cnnvdemo, we keep the U separate in
cnnv, cnnvg, and cnnvba to allow simplifying the display of proof steps
(and reducing the web page size) in lemmas for long proofs, to avoid
having to repeat "<. <. + , x. >. , abs >." over and over.

Analogous cnnv* theorems will be added for other vector space functions.


(21-Dec-2007) cofunex2g has a somewhat longer proof than might be
expected because A and B are not required to be relations but may be any
classes whatsoever.  In particular, B may be any proper class.

The recent hlxxx theorems are meant to complete the list of "(future)"
theorems referenced in the comment of ax-hilex.  These theorems will
allow us to eliminate the Hilbert Space Explorer axioms in special cases
(i.e. for concrete Hilbert spaces like CC), in order to use the Hilbert
Space Explorer theorems as part of a ZFC-only theory.


(17-Nov-2007) df-pm (with value theorem pmvalg) introduces the notion of
partial functions.  Although partial functions are ubiquitous in the
theory of operators in functional analysis, there seems to be no symbol
in the literature for them.  The closest I've seen is an occasional
"F : dom F --> B" in place of of the total function "F : A --> B",
with dom F subset A implicit.  But to do operator theory in set.mm, not
having a formal notation for for partial functions would make the theory
of operators clumsy to work with.

There are two ways to do this.  One way would be to define an analog of
df-f:

     df-fp $a |- ( F : A -|-> B <-> ( Fun F /\ F (_ ( A X. B ) ) ) $.

or equivalently (by funssxp)

     df-fp $a |- ( F : A -|-> B <-> ( F : dom F --> B /\ dom F (_ A ) ) $.

Here, the standard mapping arrow with a vertical bar in the middle is
used by the Z language to denote a partial function, and it is
the only published symbol for it I've seen, although the Z language
isn't really "textbook mathematics."  I like this symbol because of its
similarity to the familiar "F : A --> B" of df-f, and I was very
tempted to use it.  The drawback is that it defines a new syntactical
structure, not just a new symbol, so we would need a whole
mini-development of equality theorems, bound variable hypothesis
builders, etc. as we do with df-f.

Such a new structure is unavoidable when the arguments could be proper
classes, as in the case of many uses of df-f.  But in the case of the
intended uses of partial functions, the domain and range will always be
sets (at least I've never seen a requirement for them in set theory
where proper classes sometimes arise).  This means that we can define a
constant symbol for an operation similar to df-map, making all of the
theorems relating to operations immediately available.

With that in mind, I chose "^pm" as a generalization of "^m" of df-map.
I am not thrilled with it because it doesn't seem intuitive or
suggestive of its meaning, but I didn't have any better ideas.  I am
open to suggestions for a better symbol to use in place of "^pm", and in
the meantime I'll continue to use "^pm" for lack of a better
alternative.
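One consequence of the df-map-style approach is worth noting: since a
partial function from A to B either leaves each element of A unmapped or
sends it to one of the |B| elements of B, there are exactly (|B|+1)^|A|
of them, versus |B|^|A| total functions.  A short Python enumeration
(hypothetical finite stand-ins, purely illustrative) confirms the count:

```python
from itertools import product

def partial_functions(A, B):
    """Enumerate all partial functions from A to B as dicts: each
    element of A is either unmapped (None) or mapped into B."""
    A, B = list(A), list(B)
    for choice in product([None] + B, repeat=len(A)):
        yield {a: b for a, b in zip(A, choice) if b is not None}

A, B = {1, 2}, {'x', 'y', 'z'}
pf = list(partial_functions(A, B))
print(len(pf))  # (|B|+1)^|A| = 4^2 = 16
```

The total functions are the special case whose domain is all of A.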


(15-Nov-2007) Baire's Category Theorem bcth was unexpectedly hard to
prove.  A big problem is that initially I didn't know that acdc5
(Dependent Choice) would be required to prove the existence of g.  The
textbook proof simply says we conclude the existence of g "by
induction," which certainly stretches the meaning of that word.


(2-Nov-2007) 0.999... is now proved, so the volunteer request of
30-Sep-2007 below is no longer applicable, although I appreciate
the attempts of individuals such as rpenner on the physorg.com forum.

The proof was more involved than I thought it would be, requiring new
theorems serzmulc1, isummulc1a, and geoisum1.

For the proof of 0.999... itself, quantifiers were avoided except for
the implicitly quantified summation variable k.  Hopefully this will
make it possible for more non-mathematicians to follow the proof.


(22-Oct-2007) Note that pm3.26bd, pm3.27bd were renamed pm3.26bi,
pm3.27bi.


(12-Oct-2007) Some of the kmlem* proofs were shortened by restating
the lemmas and using yesterday's eldifsn.


(30-Sep-2007) 0.999...=1 has been debated for many years on Usenet and
elsewhere on the Internet.

Example:  http://forum.physorg.com/index.php?showtopic=13177 from March
2007 with 267(!) pages of discussion still on-going as of today.
Includes a poll where 41% of people disbelieve 0.999...=1.  There is
even a brief reference to Metamath somewhere in the mess.

Example:
http://groups.google.com/group/sci.math/browse_frm/thread/3186915e0766f1ca
from May 2007, whose last post was September 26.

Does someone wish to volunteer to prove

  $( The repeating decimal 0.999... equals 1. $)
  0.999... $p |- sum_k e. NN ( 9 / ( 10 ^ k ) ) = 1 $= ? $.

to put an end to it once and for all?  (Wishful thinking of course.)  At
least you'll make a name for yourself. :)  Theorem geoisum may be useful
for the proof.
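The claim itself is just the geometric series sum_{k>=1} 9/10^k =
9 * (1/10) / (1 - 1/10) = 1.  A quick exact-arithmetic check in Python
(independent of any Metamath proof) shows that the partial sums are
1 - 10^-n, so the tail vanishes:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact partial sum sum_{k=1}^{n} 9/10^k, i.e. 0.99...9 with n nines."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 2, 5):
    print(n, partial_sum(n))
# The nth partial sum falls short of 1 by exactly 10^-n.
assert all(1 - partial_sum(n) == Fraction(1, 10**n) for n in (1, 3, 7))
```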


(28-Sep-2007) The symbol for floor was changed from "floor" to
"|_" (L-shaped left bracket) at the suggestion of Paul Chapman.


(21-Sep-2007) iccf was moved out of FL's sandbox to make it "official".
It was also renamed from the earlier "icof".

Compared to the old bl2iooOLD, the bl2ioo proof is shorter because it
incorporates Paul Chapman's recent absdifltt.


(17-Sep-2007) Perhaps a reader will volunteer to create Metamath proofs
for one or more of the following.  I hope I have stated them correctly.
They should be fun puzzles, and in the unlikely event that two people
submit the same one, the shortest proof will win.  :)  The tricks
provided by these theorems may simplify the use of theorem cnco and
relatives, because they have no dummy variables to deal with, unlike
class builder representations.  If no one responds, I'll prove them
myself eventually when I have time.

For fpar, note that each operand of i^i is not a function by itself -
the intersection cuts them down so that the final set of ordered pairs
is single-valued.  This should make it interesting to prove. :)

I think the other two are relatively straightforward, involving mainly
expansions of the definitions.  It may be possible to use a special case
of fpar for the proof of opr2f, using fconstg, but I'm not sure it would
help.  In order of increasing difficulty, I would guess fsplit, opr2f,
and fpar.  If anyone finds a simpler expression for the left-hand side
of the equality, let me know.

So, paste the below at the end of your set.mm and fire up mmj2...

(Note added 2/4/09:  fpar has been added.)
  ${
    $d x y z A $. $( etc. $)
    $( Merge two functions in parallel.  Use as the second argument of a
       composition with a (2-place) operation to build compound operations
       such as ` z = ( ( sqr ` x ) + ( abs ` y ) ) ` . $)
    fpar $p |- ( ( F Fn A /\ G Fn B ) ->
         ( ( `' 1st o. ( F o. 1st ) ) i^i ( `' 2nd o. ( G o. 2nd ) ) ) =
         { <. <. x , y >. , z >. |
            ( ( x e. A /\ y e. B ) /\ z = <. ( F ` x ) , ( G ` y ) >. ) } ) $=
?$.
  $}

(Note added 2/4/09:  I will be completing fsplit soon.)
  ${
    $d x y $.
    $( A function that can be used to feed a common value to both operands
       of an operation.  Use as the second argument of a composition with
       the function of ~ fpar in order to build compound functions such
       as ` y = ( ( sqr ` x ) + ( abs ` x ) ) ` . $)
    fsplit $p |- `' ( 1st |` I ) = { <. x , y >. | y = <. x , x >. } $=
?$.
  $}

(Note added 12/16/08:  opr2f is no longer needed; this will become curry2.)
  ${
    $d x y A $. $( etc. $)
    $( Turn an operation with a constant second operand into a function of the
       first operand only, such as ` y = ( x + 5 ) ` . $)
    opr2f $p |- ( ( F Fn ( A X. B ) /\ C e. B ) ->
        ( F o. `' ( 1st |` ( V X. { C } ) ) ) =
         { <. x , y >. | ( x e. A /\ y = ( x F C ) ) } ) $=
?$.
  $}
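For anyone wanting to see what fpar asserts before attempting the proof,
here is a finite sanity check in Python.  The sets F, G, A, B, and the
universe are hypothetical stand-ins, and 1st/2nd (proper classes in
set.mm) are simulated by their restrictions to a finite universe of
pairs:

```python
from itertools import product

def compose(F, G):
    """Metamath-style class composition: (F o. G) relates x to z
    iff x G y and y F z for some y."""
    return {(x, z) for x, y1 in G for y2, z in F if y1 == y2}

def converse(R):
    return {(b, a) for a, b in R}

A, B = {1, 2}, {3, 4}
F = {(1, 10), (2, 20)}          # F Fn A
G = {(3, 30), (4, 40)}          # G Fn B
univ = A | B | {10, 20, 30, 40}
fst = {((a, b), a) for a, b in product(univ, repeat=2)}  # 1st restricted
snd = {((a, b), b) for a, b in product(univ, repeat=2)}  # 2nd restricted

# Left side: neither operand of the intersection is a function, but
# together they pin <. x , y >. down to <. ( F ` x ) , ( G ` y ) >.
lhs = (compose(converse(fst), compose(F, fst))
       & compose(converse(snd), compose(G, snd)))
rhs = {((x, y), (fx, gy)) for x, fx in F for y, gy in G}
print(lhs == rhs)
```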



(13-Sep-2007) The astute reader will notice that df-ims was changed to a
more compact version (compare df-imsOLD).  imsval3 replaces imsvalOLD for
use in reproving the related *OLD theorems, although imsval3 may be
phased out with shorter direct proofs from the new imsval.

A clever technique was used in Paul Chapman's reret (of 8-Sep-2007) to
eliminate a hypothesis by using the if() function directly, without
invoking dedth.


(8-Sep-2007) hlcom is part of an eventual derivation of the Hilbert
Space Explorer axioms using ZFC only.  A small change in the Hilbert
Space Explorer axiomatization will then allow us to convert all theorems
to pure ZFC theorems, with no changes to the theorems themselves,
whenever we are dealing with a fixed Hilbert space (such as complex
numbers).  This axiomatization change is described in the comment of
ax-hilex http://us2.metamath.org:8888/mpegif/ax-hilex.html .

I probably will not actually make this change in axiomatization but will
only describe it.  It is very simple to do for anyone interested.  I
still think it is useful to have the axioms separated out - it makes the
Hilbert Space Explorer Home Page easier to describe and it also allows
us to see what axioms are used to prove specific theorems.

The Hilbert Space Explorer Home Page
http://us2.metamath.org:8888/mpegif/mmhil.html was updated to mention
this alternate approach (the first 3 paragraphs of "The Axioms"
section).


(7-Sep-2007) The new cnmet (with Met) that will replace cnms (with
MetSp) also replaces the distance function "{ <. <. x , y >. , z >. | (
( x e. CC /\ y e. CC ) /\ z = ( abs ` ( x - y ) ) ) }" with
"( abs o. - )", which I think is nicer.  A more compact version of cnmet
could read simply "( abs o. - ) e. Met", but the D is separated out to
integrate more smoothly with other theorems.  It also makes the proof a
little easier to read.

By the way the "Base" extractor (df-ba) for normed metric spaces is
capitalized because, once it is fixed for a particular vector space U,
it is not a function, unlike e.g. the "norm" extractor (df-nm).  This is
usually our convention when there is no literature standard.  Another
example is the set of closed subsets "Clsd" (df-clsd) vs. the closure "cls"
(df-cls).


(4-Sep-2007) The following major changes have been made to set.mm.

1. The token Met (metric space) has been changed to MetSp.  A new token
called Met is defined as the class of all metrics (df-met), and a metric
space (df-ms) is defined as the pair of a base set and metric.  To
extract the base set X from a metric D, we will usually use "dom dom D".

Note that this is consistent with what we now do for topologies (df-top
and df-topsp), with "U. J" for the base set of topology J.  It is also
consistent with groups, which are defined using only the group operation.

The advantage of the new convention is that proofs will often be
shorter, and theorems will be shorter to state, e.g.

  OLD:
  msf.1 $e |- X = ( 1st ` M ) $.
  msf.2 $e |- D = ( 2nd ` M ) $.
  mscl $p |- ( ( M e. MetSp /\ A e. X /\ B e. X ) -> ( A D B ) e. RR ) $=

  NEW:
  metf.1 $e |- X = dom dom D $.
  metcl $p |- ( ( D e. Met /\ A e. X /\ B e. X ) -> ( A D B ) e. RR ) $=

2. Eventually, the theorems involving the old MetSp will be phased out
and replaced with equivalent theorems involving the new Met.  Note that
in topology, the TopSp definition has had little real value since
everything can be done more easily with Top, and the same should be
true with metric spaces.

3. The definitions making use of the old MetSp have been replaced with
ones using Met. The old definitions have been renamed *OLD, e.g. df-bl
vs. df-blOLD.  You can see the changed ones with 'show statement
df-*OLD'.

4. All theorems making use of a df-*OLD will eventually have their
labels suffixed with OLD, in the next few days.  Some of this has
already happened.  They will eventually be replaced with non-OLD
versions.

5. Based on a suggestion of Frederic Line (see the 16-Apr-2007 comment
in http://planetx.cc.vt.edu/AsteroidMeta/set.mm_discussion_replacement ),
the cryptic "( 1st ` ( 2nd ` U ) )" etc. will go away in normed
vector spaces (including pre-Hilbert spaces, Banach spaces, and Hilbert
spaces).  Instead, we will phase in the use of the named components
df-va, df-sm, df-nm and df-ba to make the theorems more readable as well
as shorter to state.  In addition, the theorems will become independent
of the details of the ordered pairs in the vector space definition.
E.g. nvge0 will be changed from

  ${
    nvge0OLD.1 $e |- W = ( 1st ` U ) $.
    nvge0OLD.2 $e |- G = ( 1st ` W ) $.
    nvge0OLD.3 $e |- N = ( 2nd ` U ) $.
    nvge0OLD.4 $e |- X = ran G $.
    $( The norm of a normed complex vector space is nonnegative. $)
    nvge0OLD $p |- ( ( U e. NrmCVec /\ A e. X ) -> 0 <_ ( N ` A ) ) $=...
  $}

to the new

  ${
    nvge0.1 $e |- X = ( Base ` U ) $.
    nvge0.2 $e |- N = ( norm ` U ) $.
    $( The norm of a normed complex vector space is nonnegative. $)
    nvge0 $p |- ( ( U e. NrmCVec /\ A e. X ) -> 0 <_ ( N ` A ) ) $=...
  $}


Again, the original versions will be renamed to *OLD.  Some of them
already have, and this renaming should be completed in a few days.

(In the future, I may extend this use of named components to metric
spaces, etc.  For now I am limiting it to normed vector spaces, which in
a way is a "final" application of topologies, metric spaces, groups, and
non-normed vector spaces.)
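The "dom dom D" idiom of item 1 is easy to see concretely: a metric D is
set-theoretically a set of ordered pairs <. <. x , y >. , d(x,y) >. , so
dom D is X X. X and dom dom D recovers X.  A quick Python illustration
(hypothetical finite encoding, not set.mm syntax), using the discrete
metric on a three-element base set:

```python
from itertools import product

X = {0, 1, 2}
# Discrete metric on X, encoded as ordered pairs <<x, y>, d(x, y)>.
D = {((x, y), 0 if x == y else 1) for x, y in product(X, repeat=2)}

def dom(R):
    """Domain of a class of ordered pairs."""
    return {a for a, b in R}

# dom D is X x X; taking dom again recovers the base set X.
print(dom(dom(D)))
```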

Over the next few days, the labels in the current set.mm will be unstable,
with frequent changes, starting at df-ms, and individual label changes
there will not be documented in the "Recent label changes" at the top of
set.mm.  The labels _before_ df-ms are stable, and any changes will be
documented in "Recent label changes" as usual.  If you are working with
set.mm, it will be safe (and preferred) to use the latest version
provided you are using things above df-ms.

The last version of set.mm before these changes is available at
us.metamath.org/downloads/metamath.zip for a week or so.


(3-Sep-2007) Tomorrow there will be a major change in the notational
conventions for metric and vector spaces.  Today's version of set.mm is
the last version prior to this change.  If you are working from the
current set.mm, you may want to archive today's version for reference,
to compare against the new version if needed.


(22-Aug-2007) Interestingly, hbxfr shortens 40 proofs and "pays" for
itself several times over in terms of set.mm size reduction.


(2-Aug-2007) I wouldn't have guessed a priori that proving addition is
continuous (plcn) would be so tedious.  Part of the problem might be
that we have defined continuity in the very general context of
topologies, but in the long run this should pay off.  I didn't use the
epsilon-delta method, but instead obtained a slightly shorter proof (I
think) by using the already available climadd together with cnmet4.
This is exactly the method used by Gleason, although his one sentence to
that effect expands to a very long proof.


(10-Jun-2007) The symbol "Cls" was changed to "Clsd".  See the
discussion at http://planetx.cc.vt.edu/AsteroidMeta/closed_and_closure


(24-May-2007) axnegex and axrecex are now no longer used by any proof,
and were renamed to axnegexOLD and axrecexOLD for eventual deletion.
The axiom list at
http://us2.metamath.org:8888/mpegif/mmcomplex.html#axioms was updated.

A note on theorem names like msqgt0:  a theorem name such as "msqgt0"
with "msq" (m=multiplication) means "A x. A", while a name such as
"sqgt0" with just "sq" means "A ^ 2".  Since we are working directly
with the axioms, we use A x. A rather than A ^ 2 because exponentiation
is developed much later.


(23-May-2007) Eric Schmidt has solved the long-standing open problem
(first posted to Usenet on Apr. 25, 1997) of whether any of the ax*ex
axioms for complex numbers are redundant.  Here are his proofs:

For axnegex:

  One thing to notice is that both 0re and 1re depend on axnegex for their
  proofs, potentially a problem if we will need to invoke these
  statements.  However, the proof of 0re only incidentally uses axnegex,
  mainly because it relies on 1re.  Instead, we note that the existence of
  any complex number implies by axcnre the existence of a real number,
  from which 0 in R follows from the (by now) usual inverse argument.  [So
  (R, +) is a group.]

  To prove axnegex, given a complex number a + bi, we would like to find
  the additive inverse as (-a) + (-b)i.  However, proving that this is an
  additive inverse requires us to know that 0i = 0, which itself depends
  on axnegex.  We can get by with a weaker statement, namely that xi is
  real for some real x. For there exist x, y in R such that 0 = y + xi, or
  xi = -y.

  Having such an x, we know there exists c in R such that b + c = x. Then
  a + bi + ci is real, and hence has an additive inverse d. Then ci + d is
  an additive inverse of a + bi, which proves axnegex.

  We can then prove 1 in R using the current Metamath proof, in case we
  will need it.

For axrecex:

  For axrecex, (a + bi) * (a - bi)/(a^2 + b^2) = 1 ought now to be
  provable without any hoops to jump through.  The two main points are
  proving (a + bi) * (a - bi) = a^2 + b^2 and that a^2 + b^2 != 0 if
  a + bi != 0 (from which, using the now provable 0i = 0, we readily
  obtain a != 0 or b != 0).
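The two main points quoted above can be checked with exact rational
arithmetic.  This plain Python sketch (independent of the set.mm
formalization) verifies that (a - bi)/(a^2 + b^2) really is a reciprocal
of a + bi whenever a != 0 or b != 0:

```python
from fractions import Fraction
from itertools import product

def cmul(p, q):
    """Multiply a + bi and c + di, represented as exact (a, b) pairs."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

# Sample of rationals; for every nonzero a + bi over this sample,
# (a + bi) * ((a - bi) / (a^2 + b^2)) = 1.
sample = [Fraction(n, d) for n in (-2, -1, 0, 1, 3) for d in (1, 2)]
for a, b in product(sample, repeat=2):
    if a == b == 0:
        continue
    n = a * a + b * b          # nonzero since a != 0 or b != 0
    recip = (a / n, -b / n)    # the claimed reciprocal
    assert cmul((a, b), recip) == (1, 0)
print("ok")
```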

I formalized his axnegex proof, which was posted yesterday as negext.
The axrecex proof will need a reorganization of set.mm so that some of
the ordering theorems come before the reciprocal/division theorems, so
it may take a couple of days to formalize.  These kinds of proofs tend
to be somewhat long, because we can't make use of future theorems that
depend on the axioms we are trying to prove.  Eventually axnegex and
axrecex will be eliminated from the official set of complex number
axioms at http://us2.metamath.org:8888/mpegif/mmcomplex.html, reducing
the number of axioms from 27 to 25.


(19-May-2007) As you can see from its "referenced by" list, 3expia ends
up shortening 40 proofs, which was a surprise to me, and shrinks the size
of set.mm accordingly.


(18-May-2007) Paul Chapman's relatively sophisticated bcxmas was done
entirely with mmj2.  He writes, "using mmj2, I don't have to remember
the names of theorems.  What I do with steps like this is try something
and see if mmj2 finds a theorem which fits.  When I don't, I usually add
another step (or very occasionally try a different sequence).  For more
complex steps I tend to search set.mm for text fragments I expect to
find in theorems which I think might fit the problem, e.g. 'A + B ) e.'."


(17-May-2007) ssimaexg and subtop were taken from FL's "sandbox" and
made official, with slightly shorter proofs.  The originals were renamed
ssimaexbOLD, topsublem1OLD, topsublem2OLD, and topsubOLD, and will be
deleted eventually.


(30-Apr-2007) The definitions of +v, etc. of 26-Apr-2007 have been
retired and replaced with new ones.  See df-va and the statements
following it.


(26-Apr-2007) It seems the new symbols +v, etc., described in the
23-Apr-2007 note below, are not a good idea after all.  It quadruples
the proof size of ncvgcl (compared to ncvgclOLD), and in general is
going to lead to longer proofs, especially for theorems brought over
from more general theories (like ncvgcl is, from vcgcl).  I have several
other ideas I'm considering but need to think them over carefully.  In
the meantime, I'll probably continue to develop new theorems with the
"W = ( 1st ` U )" etc. hypotheses, for retrofitting later.


(23-Apr-2007) The symbols +v, .s, 0v, -v, norm, and .i were taken from
the Hilbert Space Explorer for use by new definitions df-va, df-sm,
df-0v, df-vs, df-nm, and df-ip.  This will allow us to use the less
cryptic "( +v ` U )" for vector addition in a normed complex vector
space U (and Banach and Hilbert spaces), instead of the old
"( 1st ` ( 1st ` U ) )".  This was brought up by fl and discussed in the
16-Apr-2007 entries at
http://planetx.cc.vt.edu/AsteroidMeta/set.mm_discussion_replacement .
The new definitions will also provide more "generic" theorems in case we
decide to change the ordered pair structure of df-ncv, etc.

The new definitions df-va and df-ba serve the purpose of fl's proposed
df-ahf and df-hilf in http://planetx.cc.vt.edu/AsteroidMeta/fl's_sandbox .

The symbols in the Hilbert Space Explorer have been replaced with
+h, .h, 0h, -h, .ih, and normh.


(18-Apr-2007) The old Hilbert Space Explorer axioms ax-hvaddcl and
ax-hvmulcl will be replaced by ax-hfvadd and ax-hfvmul so that the
operations can be used with our group, vector space, and metric space
theorems.


(12-Apr-2007) Eric Schmidt discovered that the old ax1re, 1 e. RR,
can be weakened to ax1cn, 1 e. CC.  I updated the mmcomplex.html
page accordingly.


(27-Mar-2007) Maybe this is REALLY REALLY the end of shortening
grothprim.  At least we broke through the 200 symbol barrier.

axgroth3 was used to shorten the previous grothprim.  Unfortunately,
that one (grothprim-8OLD) is now obsolete, so I'll probably delete
axgroth3.


(23-Mar-2007) grothprim was shortened a little more by exploiting the
Axiom of Choice (via fodom and fodomb).  As for shortening grothprim
further, this may REALLY be the end of what I am capable of doing.


(21-Mar-2007) Paul Chapman revised the proof of 0nn0 (compare 0nn0OLD)
to use olci, which he feels is more natural than the old one's use of
olc, "which seems to make a complicated wff out of a simple one."


(20-Mar-2007) Unlike df-f, dff2 avoids direct or indirect references to
df-id, df-rel, df-dm, df-rn, df-co, df-cnv, df-fun, and df-fn (all of
which are used when df-f is expanded to primitives) but is still almost
as short as df-f.  I was surprised at how long and difficult the proof
was, given the vast number of theorems about functions that we already
have.  Perhaps a shorter proof is possible that I'm not seeing.


(17-Mar-2007) df-hl was changed to an equivalent one that is slightly
easier to use.  Compare the old one, df-hlOLD.


(15-Mar-2007) dfid2 is the only theorem that makes use of the fact that
x and y don't have to be distinct in df-opab.  I doubt that dfid2 will
be used for anything, but I thought it was interesting to demonstrate
this.


(12-Mar-2007) This may be it for grothprim for a while.  I have stared
at this thing for a long time and can't see any way to shorten it
further.  If anyone has any ideas let me know.


(7-Mar-2007) impbid1 and impbid2 occupy 570 bytes in set.mm but reduce
other proofs by 1557 bytes, with a 987 byte net size reduction of
set.mm.


(5-Mar-2007) In spite of its apparent simplicity, abexex is quite
powerful and makes essential use of the Axiom of Replacement (and is
probably equivalent to it, not sure).  Chaining abexex can let us prove
the existence of such things as { x | E. y E. z E. w...} that arise from
non-trivial class builders (e.g. other than just the subsets of cross
products) corresponding to ordered pair abstraction classes, etc., and
which can be quite difficult to prove directly.


(4-Mar-2007) I found shorter proofs for elnei, neips, ssnei2, innei, and
neissex.  The previous proofs are in elneiOLD, neipsOLD, ssnei2OLD,
inneiOLD, and neissexOLD (which will be deleted in a few days).


(1-Mar-2007) The contributions by Frederic Line are new versions
provided by him, using the new definition df-nei (see the notes of
15-Feb-2007 below).  Compare the *OLD ones starting at df-neiOLD.  Most
have been renamed as well, and the description of each *OLD version gives
the corresponding new name.


(20-Feb-2007) I have incorporated new sections at the end of the set
theory part of set.mm (before the Hilbert space part), called
"sandboxes," that will hold user contributions that are too specialized
for the "official" set.mm or that I haven't yet reviewed for official
inclusion.  Here are the notes in set.mm about these sections.  And, to
prevent any future misunderstandings, some dire warnings.  :)

  "Sandboxes" are user-contributed sections that are not officially part
  of set.mm.  They are included in the set.mm file in order to ensure that
  they are kept synchronized with label, definition, and theorem changes
  in set.mm.  Eventually they may be broken out as separate modules,
  particularly in conjunction with future Ghilbert translations.

  Notes:

  1. I (N. Megill) have not necessarily reviewed definitions for soundness
     or agreement with the literature.
  2. Over time I may decide to make certain definitions and theorems
     "official," in which case they will be moved to the appropriate section
     of set.mm and author acknowledgments added to their descriptions.
  3. I may rename statement labels and constants at any time.
  4. I may revise definitions, theorems, proofs, and statement descriptions at
     any time.
  5. I may add or delete theorems and/or definitions at any time.
  6. I may decide to delete part or all of a sandbox at any time, if I feel
     it will not ultimately be useful or for any other reason.

  If you want to preserve your original contribution, keep your own copy
  of it along with the version of set.mm that works with it.  Do not depend
  on set.mm as its permanent archive.

  Syntax guideline:  if at all possible, please use only 0-ary class constants
  for new definitions, to make soundness checking easier.

  By making a contribution, you agree to release it into the public domain,
  according to the statement at the beginning of this file.

Today I added sandboxes for Fred Line and Steve Rodriguez.  The contents
of their sandboxes appear in the Theorem List, at the end of the "Metamath
Proof Explorer" part.


(15-Feb-2007) The old definition of neighborhood was somewhat awkward to
work with in some situations.  In particular, "the set of all neighborhoods
of a point," which occurs when working with limit points, needed a class
abstraction.  So I have revised the definition of neighborhood to be a
function that maps each subset to all of its neighborhoods, rather than
a binary relation.  This also fits more consistently with some other
definitions, I think.

The neighborhood theorems will be revised so that

  N e. ( ( nei ` J ) ` S )

is used instead of

  N ( nei ` J ) S

to mean "N is a neighborhood of subset S".  Even though this seems
longer, I believe it will make certain future theorems more natural and
even have shorter proofs in some cases.  For example, "the set of all
neighborhoods of S" just becomes

 ( ( nei ` J ) ` S )

instead of

 { x | x ( nei ` J ) S }

so that working with a dummy variable becomes unnecessary.  (We could
also use

  ( ( `' ( nei ` J ) ) " { S } )

to avoid a dummy variable with the old definition, but I don't think
many people would enjoy deciphering that!)

The old neighborhood is called "neiOLD", with its theorems renamed to
*OLD, as in df-neiOLD, etc.  These will be deleted once the conversion
is complete.


(5-Feb-2007) df-10 was added to the database, and the comments under
df-2 were revised.  Since we don't have an explicit decimal
representation of numbers, df-10 will allow more reasonable
representations as powers of 10 than just having the digits defined.
E.g.  (omitting parentheses):

    old: 456 = 4*(9+1)^2 + 5*(9+1) + 6
    new: 456 = 4*10^2 + 5*10 + 6

Previously, I avoided defining 10 since a presumed future decimal
representation might have juxtaposed 1 and 0. But such a representation
seems far off and low priority at this time, so an explicit definition
of 10 will be helpful in the interim.

A sample theorem 7p3e10 was added to "test" the new definition;
additional simple theorems for the number 10 will be added shortly.
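The two representations above can be checked with ordinary arithmetic.
This is purely an illustrative sketch (set.mm itself has no decimal
notation, and the helper base_repr below is invented for this note, not
anything in the database):

```python
def base_repr(digits, base):
    """Evaluate a digit list as a polynomial in the given base."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# old style: powers of (9+1), since 10 itself was previously undefined
old = 4 * (9 + 1) ** 2 + 5 * (9 + 1) + 6

# new style: powers of the newly defined constant 10
new = base_repr([4, 5, 6], 10)

print(old, new)  # both evaluate to 456
```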


(5-Feb-2007) (cont.)  A new version of ax-11 was added.  The original
ax-11 was renamed ax-11o, and all uses of it were replaced with
references to the new theorem ax11o (proved from the new ax-11).  A new
axiomatization was placed on the mmset.html page, and a new table was
added that summarizes what is known about various possible subsystems.

Theorem ax11a (mentioned yesterday and earlier) was renamed ax11.


(2-Feb-2007) ax11a2 proves that ax11a can replace ax-11.  I have been
wondering off and on for over 10 years whether this is the case, so I am
pleased to see it proved.  This answers the open question of 22-Jan-2007
below:  "I don't know if ax-11 can be recovered from it (that would be
nice)..."  This now means we can replace ax-11 with the shorter
equivalent

      ( x = y -> ( A. y ph -> A. x ( x = y -> ph ) ) )

which I am taking under consideration.  However, it would be nicer if
ax-11 could be proved from ax11a without relying on ax-16 and ax-17, so
that the "predicate calculus without distinct variables" portion ax-1
through ax-15 (+ ax-mp + ax-gen) would have the same metalogical power
of proof.

Even if we can't prove ax-11 from ax11a without ax-16 and ax-17, the
axiom set ax-1 through ax-15 would still be logically complete in the
sense described at
http://us2.metamath.org:8888/mpegif/mmzfcnd.html#distinctors .  The
deficiency would be that more theorems would have dummy variables in
their distinctor antecedents, in particular the old ax-11 proved as a
theorem.  However, in a way this is only of cosmetic importance, since
no matter how many axioms without distinct variables we have, Andreka's
theorem tells us there will always be some theorems with dummy variables
in their antecedents.

Now, if we could just simplify the long and ugly ax-12...  I have
attempted that off and on also, trying to find a shorter axiom that
captures its "essence" in the presence of the others, but without
success.  (I don't care that much about ax-15, since it is redundant in
the presence of ax-17, as theorem ax15 shows.)  The basic statement it
makes is an atomic case of ax-17 using distinctors, just like ax-15, and
that basic statement should be provable in the same way as theorem ax15
if we have the right support theorems.  The problem so far is that those
support theorems seem to need ax-12 in a different role.

ax11a2 also shows that, if we wish, we can "weaken" ax11a by making
$d x y and $d x ph, and still have completeness.  Some people
might prefer this as part of an alternate axiomatization that tries to
reduce double binding in the axioms by having all set variables
distinct.


(1-Feb-2007) Interestingly, 3anidm23 will shorten 13 proofs, and
adding it will result in a net decrease in the size of set.mm.


(31-Jan-2007) To prove that ipval has the inner product property
( C x. ( A ( ip ` U ) B ) ) = ( ( C S A ) ( ip ` U ) B ), i.e.
C.<A,B> = <C.A,B> in standard notation, for all complex C (in the
presence of the parallelogram law) is nonelementary:  it involves an
induction to show it holds for C e. NN, then we extend it to QQ, then to
RR using continuity and the fact that QQ is dense in RR (qbtwnre), then
to CC.  I think this was proved by Jordan and von Neumann in 1935.  The
difficulty of the proof may be why most (all?) books define a Hilbert
space as not just a special normed space but as having a "new" operation
of inner product, from which a norm is derived.

I had some misgivings because of the difficulty of the proof, but I
think it will pay off:  our definition has the nice property that
CHil (_ CBan (_ CNrmVec which the standard definition doesn't.  This
will allow these spaces to share theorems trivially, which isn't the
case with the "standard" textbook definition.  (Analogous to this is our
NN (_ ZZ (_ QQ (_ RR (_ CC.  The standard textbook definition of CC as
ordered pairs from RR doesn't have this property formally.)

We will need some additional theory about continuous functions for the
proof, but that should be useful for other things as well.  Anyway, it
will be some time before all the inner product properties are proved.

I may add pre-Hilbert spaces CPreHil, which is CNrmVec in which the
parallelogram law holds.  Then we would also have
CHil (_ CPreHil (_ CNrmVec.  However, CBan and CPreHil are not
comparable as subclasses (one is complete; the other has the
parallelogram law).  CHil would have the trivial definition
CHil = ( CBan i^i CPreHil ).
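For reference, the extension argument described above (due to Jordan and
von Neumann) can be outlined in ordinary notation.  This is a sketch of
the standard textbook argument, not a transcription of the set.mm proof:

```latex
% Polarization defines the inner product from the norm:
\[
  \langle x,y\rangle = \tfrac{1}{4}\bigl( \|x+y\|^2 - \|x-y\|^2
      + i\|x+iy\|^2 - i\|x-iy\|^2 \bigr)
\]
% The parallelogram law gives additivity,
%   <x+z,y> = <x,y> + <z,y>,
% and induction on n then yields homogeneity for natural scalars:
\[
  \langle nx,y\rangle = n\langle x,y\rangle \quad (n \in \mathbb{N})
\]
% For a rational p/q, apply the natural-number case twice:
\[
  q\,\langle (p/q)x,y\rangle = \langle px,y\rangle = p\,\langle x,y\rangle
\]
% Real scalars follow by continuity, since QQ is dense in RR (qbtwnre);
% finally <ix,y> = i<x,y>, read off from the polarization identity,
% extends the result to all of CC.
```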


(24-Jan-2007) I finally was able to prove a single theorem ax11inda that
covers all cases of the quantification induction step simultaneously.
It has no restrictions on z and needs no "tricks" to use it (so there
is no associated uncertainty that some special case hasn't been
overlooked).  This makes all the other versions obsolete; they have
been renamed to *OLD.  Part of the problem before was that I didn't even
know what it should end up looking like, much less how to prove it.
While the previous two evenings of effort were thus wasted, perhaps
subconsciously they helped lead me towards this final solution.

Although it is unnecessary now, I reproved yesterday's ax11demo (whose
old proof is called ax11demoOLD) to show how simple its proof becomes
with the new ax11inda.

This completes, therefore, all the basis and induction steps needed to
derive any wff-variable-free instance of ax-11 without relying on ax-11,
thus showing that ax-11 is not logically independent of the other axioms
(even though it is metalogically independent).


(23-Jan-2007) I was unhappy with yesterday's ax11inda (now ax11indaOLD)
because it was deceptively difficult to use for actual examples I tried,
and it wasn't clear to me that it could handle all possible cases
theoretically (e.g. it wasn't clear that I could derive today's ax11demo
with it).  The new ax11inda is simple to use, but it only works when z
and y are distinct.  I added the more powerful ax11inda2 that can be used
otherwise.  I think ax11inda2 can cover all possible cases, although I'm
still working on a convincing argument for that.

ax11inda2 is still not as easy to use - the variable renaming to
eliminate the 2nd hypothesis can be very tricky.  I added ax11demo to
show how to use it.

ax11inda3 is really a lemma for ax11inda, but I thought it was
interesting in its own right because it has no distinct variable
restrictions at all, and I made it a separate theorem for now.  I might
rename it to ax11indalem, though.


(22-Jan-2007) The following email excerpt describes the new theorems
related to ax-11.

  Hi Raph,

  > 4. How important is ax-11?
  >
  > Clearly, all theorems of PA can be proved using your axioms, but it's
  > quite possible that ax-11 makes the statement of certain theorems more
  > general in a useful way, and thus the resulting proof files would be
  > shorter and clearer. I'm particularly interested in the quantitative
  > question: how _much_ shorter? This is more a question for Norm than
  > for Rob, but in any case it's entirely plausible that the only real
  > way to answer it would be to try to prove a corpus of nontrivial
  > theorems both ways.

  I don't know if I have the answer you seek, but I'll recap what I know
  about ax-11:

    ( -. A. x x = y -> ( x = y -> ( ph -> A. x ( x = y -> ph ) ) ) )
    http://us2.metamath.org:8888/mpegif/ax-11.html

  (You may already know some of this.)  Before Juha proved its
  _metalogical_ independence, I spent some time in the other direction,
  trying to prove it from the others.  My main result was proving, without
  ax-11, the "distinct variable elimination theorem" dvelimf2 (which
  pleased me at the time):

    http://us2.metamath.org:8888/mpegif/dvelimf2.html

  that provides a method for converting "$d x y" to the antecedent
  "-. A. x x = y ->" in some cases.  This theorem can be used to derive,
  without ax-11, certain instances of ax-11.  Theorem ax11el shows an
  example of the use of dvelimf2 for this.

  In the remark under ax-11, I say:

    Interestingly, if the wff expression substituted for ph contains no wff
    variables, the resulting statement can be proved without invoking this
    axiom.  This means that even though this axiom is metalogically
    independent from the others, it is not logically independent.  See
    ax11el for a simple example of how this can be done.  The general case
    can be shown by induction on formula length.

  Yesterday I added the theorems needed to make this remark rigorous.  For
  the basis, we have for atomic formulas with equality and membership
  predicates:

    http://us2.metamath.org:8888/mpegif/ax11eq.html
    http://us2.metamath.org:8888/mpegif/ax11el.html

  (These were tedious to prove.  ax11el is the general case that replaces
  an older, more restricted demo example also called ax11el, now obsolete and
  temporarily renamed ax11elOLD.)  As a bonus, we also have the
  special-case basis for any wff in which x is not free:

    http://us2.metamath.org:8888/mpegif/ax11f.html

  For the induction steps, we have for negation, implication, and
  quantification

    http://us2.metamath.org:8888/mpegif/ax11indn.html
    http://us2.metamath.org:8888/mpegif/ax11indi.html
    http://us2.metamath.org:8888/mpegif/ax11inda.html

  respectively.  I wanted the last one to be prettier (without the implied
  substitution and dummy variable) but wasn't successful in proving it
  that way; nonetheless it is hopefully apparent how it would be used for
  the induction.

  The "distinctor" antecedent of ax-11 can be eliminated if we
  assume x and y are distinct:

    ( x = y -> ( ph -> A. x ( x = y -> ph ) ) )  where $d x y
    http://us2.metamath.org:8888/mpegif/ax11v.html

  I didn't try to recover ax-11 from this, but my guess is that we can.

  We can also eliminate the "distinctor" antecedent like this:

    ( x = y -> ( A. y ph -> A. x ( x = y -> ph ) ) )
    http://us2.metamath.org:8888/mpegif/ax11a.html

  which has no distinct variable restriction.  This is a curious
  theorem; I don't know if ax-11 can be recovered from it (that would
  be nice) or if it can be proved without relying on ax-11.

  Norm


(20-Jan-2007) enrefg, sbthlem10, and sbth have been re-proved so that
the Axiom of Replacement is no longer needed.


(18-Jan-2007) The replacements for the clim* and climcvg* families are
complete.  In a few days, the old theorems will be made obsolete,
with their replacements indicated in the following list, which will
be added to the "Recent Label Changes" section of set.mm.

Date      Old       New         Notes

18-Jan-07 climcvgc1 ---         obsolete; use clmi1
18-Jan-07 climcvg1  ---         obsolete; use clmi2
18-Jan-07 clim1     ---         obsolete; use clm2
18-Jan-07 clim1a    ---         obsolete; use clm3
18-Jan-07 clim2a    ---         obsolete; use clm2
18-Jan-07 clim2     ---         obsolete; use clm4
18-Jan-07 climcvg2  ---         obsolete; use clmi2
18-Jan-07 climcvg2z ---         obsolete; use clmi2
18-Jan-07 climcvgc2z ---        obsolete; use clmi1
18-Jan-07 climcvg2zb ---        obsolete; use clmi2
18-Jan-07 clim2az   ---         obsolete; use clm3
18-Jan-07 clim3az   ---         obsolete; use clm3
18-Jan-07 clim3a    ---         obsolete; use clm3
18-Jan-07 clim3     ---         obsolete; use clm4
18-Jan-07 clim3b    ---         obsolete; use clm2
18-Jan-07 climcvg3  ---         obsolete; use clmi2
18-Jan-07 climcvg3z ---         obsolete; use clmi2
18-Jan-07 clim4a    ---         obsolete; use clm3
18-Jan-07 clim4     ---         obsolete; use clm4
18-Jan-07 climcvg4  ---         obsolete; use clmi2
18-Jan-07 climcvgc4z ---        obsolete; use clmi1
18-Jan-07 climcvg4z ---         obsolete; use clmi2
18-Jan-07 clim0cvg4z ---        obsolete; use clm0i
18-Jan-07 climcvgc5z ---        obsolete; use clmi1
18-Jan-07 climcvg5z ---         obsolete; use clmi2
18-Jan-07 clim0cvg5z ---        obsolete; use clm0i
18-Jan-07 climnn0   ---         obsolete; use clm4
18-Jan-07 climnn    ---         obsolete; use clm4
18-Jan-07 clim0nn   ---         obsolete; use clm0
18-Jan-07 climcvgnn ---         obsolete; use clmi2
18-Jan-07 climcvgnn0 ---        obsolete; use clmi2
18-Jan-07 clim0cvgnn0 ---       obsolete; use clm0i
18-Jan-07 climcvg2nn0 ---       obsolete; use clmi2
18-Jan-07 clim0cvg2nn0 ---      obsolete; use clm0i
18-Jan-07 climnn0le ---         obsolete; use clm4le
18-Jan-07 clim0nn0le ---        obsolete; use clm4le and clm0


(14-Jan-2007) The purpose of resiexg is to allow us to re-prove
(eventually) the Schroeder-Bernstein theorem sbth without invoking the
Axiom of Replacement.


(11-Jan-2007) Right now there is a confusing mess of about 3 dozen
theorems in the clim* and climcvg* families.  It appears that these can
all be replaced by around 7 theorems that cover all possible cases, and
clm1 is the first in this new family.  These should allow us to get rid
of the old ones, which will probably happen soon.


(8-Dec-2006) In the comment of 17-Nov-2006 below, I mentioned
"ra4sbcgfOLD used some clever tricks to convert the hypothesis of
ra4sbcfOLD to an antecedent."  Since ra4sbcgfOLD will soon be deleted, I
extracted the "trick" into a neat stand-alone theorem, dedhb.  I
shortened the proof of ra4sbcfOLD with it to show how dedhb is used.


(6-Dec-2006) I put a detailed comment about the hypotheses in imsmslem
because it uses them all in one place.  I am making note of it here for
future reference.  I've been roughly trying to keep the variable names
consistent.  There are a few changes from one theory to the next, e.g.
the group theory unit U is changed to Z (zero) in normed vector space
because it seems more natural.

Even though all these hypotheses are getting cumbersome to drag around,
that is what happens when the implicit assumptions of analysis books are
made explicit.  Fortunately, many of them tend to disappear in final
applications, such as imsms or ccims.

While it would be theoretically nicer to allow general division rings
for the scalar product of vector spaces, I think that restricting it to
CC is a reasonable compromise from a practical point of view, since
otherwise we'd need up to 5 additional hypotheses to specify the
division ring components.  In any case, most proofs would be essentially
the same if we need that generality in the future, so much of the hard
work would already be done.  There may even be an additional advantage
to doing it with CC first:  the CC proofs would tell us the minimal
number of ring theorems we would need for the more general development,
so that we could get there more quickly.

Steve Rodriguez sent in his ncvcn of 4-Dec-2006 at a fortuitous time,
because it provided the special case needed for the weak deduction
theorem dedth used in the imsms proof.


(4-Dec-2006) vcm shows that we can equivalently define the inverse of
the underlying group in a complex vector space as either the group
inverse or minus 1 times a vector.  This shows that the requirement of
an underlying Abelian group is not necessary; it could instead be an
Abelian monoid (which generalizes an Abelian group by omitting the
requirement for inverse elements), although I didn't see any mention of
that in the literature.  In any case, for future theorems I am thinking
of using mostly minus 1 times a vector in order to be compatible with
the Hilbert Space Explorer, which does not postulate a negative vector
as part of its axioms, since it can be derived from the scalar product
in the same way as vcm does.  We can use vcm to obtain the other
approach.


(1-Dec-2006) dvdemo1 and dvdemo2 are discussed at:
http://planetx.cc.vt.edu/AsteroidMeta/U2ProofVerificationEngine


(17-Nov-2006) ra4sbc eliminates the hypothesis of ra4sbcf, making the
latter obsolete (and it will be deleted eventually).  It will also make
ra4sbcgf - renamed to ra4sbcgfOLD - obsolete, since its first antecedent
is now redundant.  (Kind of sad, because ra4sbcgfOLD used some clever
tricks to convert the hypothesis of ra4sbcfOLD to an antecedent; looking
at it again, I don't know if I could ever figure it out again.  Oh
well.) ra4sbc will also eliminate the distinct variable restriction x,A
in ra4sbca and ra4csbela (the previous versions of which have been
renamed to ra4sbcaOLD and ra4csbelaOLD).


(15-Nov-2006) The redundant Separation, Empty Set, and Pairing axioms of
ZF set theory were separated out so that their uses can be identified
more easily.  After each one is derived, it is duplicated as a new
axiom:

  Immediately after axsep ($p), we introduce ax-sep ($a)
  Immediately after axnul ($p), we introduce ax-nul ($a)
  Immediately after axpr ($p), we introduce ax-pr ($a)

To go back to the "old way" that minimizes the number of axioms, we
would just delete each $a and replace all references to it with the $p
immediately above it.  Thus we can easily go back and forth between two
approaches, as our preference dictates:  a minimal ZF axiomatization or
a more traditional one that includes the redundant axioms.


(9-Nov-2006) An interesting curiosity:  I updated the longest path in
the "2+2 trivia" section on the Metamath Proof Explorer home page, and
the longest path changed dramatically.  The path length increased from
132 to 137 - an occasional increase is to be expected, as over time new
theorems (common subproofs) are found that shorten multiple proofs.  The
curious thing is that in the old path, not a single theorem of predicate
calculus was in the list:  it jumped over predicate calculus completely
with the path: eqeq1 (set theory) <- bibi1d (prop. calc.).  However, the
new path has 22 theorems of predicate calculus, mostly uniqueness and
substitution stuff.  This was caused by the change of 12-Sep-2006 (see
notes for that date below) that provided a different path for proving
0ex.  Here is the old path for comparison:

  The maximum path length is 132.  A longest path is:  2p2e4 <- 2cn <-
  2re <- readdcl <- axaddrcl <- addresr <- 0idsr <- addsrpr <- enrer <-
  addcanpr <- ltapr <- ltaprlem <- ltexpri <- ltexprlem7 <- ltaddpr <-
  addclpr <- addclprlem2 <- addclprlem1 <- ltrpq <- recclpq <- recidpq <-
  recmulpq <- mulcompq <- dmmulpq <- mulclpq <- mulpipq <- enqer <-
  mulasspi <- nnmass <- omass <- odi <- om00el <- om00 <- omword1 <-
  omwordi <- omword <- omord2 <- omordi <- oaword1 <- oaword <- oacan <-
  oaord <- oaordi <- oalim <- rdglim2a <- rdglim2 <- rdglimt <- rdglim <-
  rdgfnon <- tfr1 <- tfrlem13 <- tfrlem12 <- tfrlem11 <- tfrlem9 <-
  tfrlem7 <- tfrlem5 <- tfrlem2 <- tfrlem1 <- tfis2 <- tfis2f <- tfis <-
  tfi <- onsst <- ordsson <- ordeleqon <- onprc <- ordon <- ordtri3or <-
  ordsseleq <- ordelssne <- tz7.7 <- tz7.5 <- wefrc <- epfrc <- epel <-
  epelc <- brab <- brabg <- opelopabg <- opabid <- opex <- prex <- zfpair
  <- 0inp0 <- 0nep0 <- snnz <- snid <- snidb <- snidg <- elsncg <- dfsn2
  <- unidm <- uneqri <- elun <- elab2g <- elabg <- elabgf <- vtoclgf <-
  hbeleq <- hbel <- hbeq <- hblem <- eleq1 <- eqeq2 <- eqeq1 <- bibi1d <-
  bibi2d <- imbi1d <- imbi2d <- pm5.74d <- pm5.74 <- anim12d <- prth <-
  imp4b <- imp4a <- impexp <- imbi1i <- impbi <- bi3 <- expi <- expt <-
  pm3.2im <- con2d <- con2 <- nega <- pm2.18 <- pm2.43i <- pm2.43 <-
  pm2.27 <- id <- mpd <- a2i <- ax-1


(8-Nov-2006) The fact that dtru (and thus ax-16) can be proved without
using ax-16 came as something of a surprise.  Still open is whether
ax-16 can be derived from ax-1 through ax-15 and ax-17.

(Later...)  Well, it turns out ax-16 can be derived from ax-1 through
ax-15 and ax-17!  That is a complete surprise.  The "secret" lies in
aev, which is a nice little theorem in itself.  I've updated the
mmset.html page - it's not very often that a new result is found about
the axiom system.  Perhaps I'll still leave in the dtruALT proof since
it is an interesting exercise in predicate logic without the luxury of
definitions, although I might delete it since it is not very important
anymore.


(4-Nov-2006) To simplify the notation - which is still quite awkward - I
decided to specialize vector spaces to the complex field, instead of
defining vector spaces over arbitrary division rings, since that is what
I expect
we will use most frequently.  If we need to generalize later, most
proofs should be nearly the same.


(3-Nov-2006) isgrp and grplidinv replace the older versions, but use the
hypothesis "X = ran G" instead of "X = dom dom G".  This allows us to
eliminate the 5 theorems with the "X = dom dom G" hypothesis, and all
theorems with that hypothesis have now been deleted from the database.


(31-Oct-2006) All group theory theorems (except the first 5 leading up
to grprn) were re-proved with "X = ran G" instead of "X = dom dom G" as
the hypothesis.


(29-Oct-2006) Steve Rodriguez provided a shorter proof (by 190 bytes in
the compressed proof size and by 39377 bytes in the HTML page size) for
efnn0valtlem (the lemma for his efnn0valt).


(26-Oct-2006) See
http://planetx.cc.vt.edu/AsteroidMeta/Translation_Systems for discussion
related to isarep1 and isarep2.


(25-Oct-2006) Most books (at least the ones I looked at) that define
a group with only left identity/inverse elements appear to implicitly
assume uniqueness when they derive the right identity/inverse elements,
but you need the right identity/inverse elements to prove uniqueness.
This makes our proof, which involves careful quantifier manipulations to
circumvent circular reasoning, much more complicated than the ones in
textbooks.  I don't know of a simpler way to do it.


(22-Oct-2006) I remind the reader of the entry from (17-May-2006) below
called "Dirac bra-ket notation deciphered."

kbass6t completes the associative law series kbass1t-kbass6t.  I moved
them to one place in the database for easier comparison:
http://us2.metamath.org:8888/mpegif/mmtheorems80.html#kbass1t


(19-Oct-2006) The mmnotes.txt entry of (4-Sep-2006) describes the
general philosophy I have settled on for structures like metric spaces,
which seems to be working out well:

    hyp.1 $e |- X = ( 1st ` M ) $.
    hyp.2 $e |- D = ( 2nd ` M ) $.
    xxx   $p |- (metric space theorem involving M, X, D) $=...

For topologies, the "pure" approach analogous to metric spaces would be
to work with topological spaces df-topsp, which defines topological
structures as ordered pairs S = <. X , J >..  We would then have e.g.
(hypothetical example not in the database):

    1openA.1 $e |- X = ( 1st ` S ) $.
    1openA.2 $e |- J = ( 2nd ` S ) $.
    1openA   $p |- ( S e. TopSp -> X e. J ) $=...

However, I am treating topological spaces in a different way because it
is easy to recover the underlying set from the topology on it (just take
the union).  So theorems can be shortened as follows, still separating
the topology from the underlying set in the theorem itself:

    1open.1 $e |- X = U. J $.
    1open   $p |- ( J e. Top -> X e. J ) $=...

This last is the standard I am adopting for the special case of
topologies.  It saves a little bit of space in set.mm.  Switching to the
"pure" approach in the hypothetical 1openA would be trivial if we ever
wanted to do that for aesthetic consistency or whatever.

I bring this up because yesterday's grpass shows that we are taking a
similar approach for group theory, where the underlying set can be
recovered from the domain of the group operation:  X = dom dom G.
Again, it would be trivial to convert all theorems to the "pure"
approach if for some reason we wanted to do that in the future.


(1-Oct-2006) Note the parsing of ac9s.  The infinite Cartesian product
X_ x e. A ... takes a class (B in this case) and produces another class
(X_ x e. A B).  Restricted quantification A. x e. A ..., on the other hand,
takes a wff (B =/= (/)) and produces another wff (A. x e. A B =/= (/)).

    ac9s $p |- ( A. x e. A B =/= (/) <-> X_ x e. A B =/= (/) )
                           <------->     <--------->
                             wff            class
                 <----------------->     <----------------->
                       wff                   wff

If we were to use additional parentheses (which are unnecessary for
unambiguous parsing), ac9s would read:

    ac9s $p |- ( A. x e. A ( B =/= (/) ) <-> ( X_ x e. A B ) =/= (/) )

So far in the database, the following definitions with "restricted"
bound variables take a class and produce a class:

    df-iun       U_ x e. A B
    df-iin     |^|_ x e. A B
    df-ixp       X_ x e. A B
    df-sum     sum_ x e. A B

If we wanted, we could define these surrounded by parentheses to
eliminate any possible confusion.  No proofs would have to be changed,
only the theorem statements.  However, there are already too many
parentheses in a lot of theorems.  Since the parenthesis-free notation
for these is unambiguous, I thought it would be best in the long run.
It's just a matter of getting used to it, and if in doubt one can always
consult the syntax hints or use "show proof ... /all".

A different example of this kind of possible confusion is sbcel1g:

    ( [ A / x ] B e. C <-> [_ A / x ]_ B e. C )
                <---->     <----------->
                 wff          class
      <-------------->     <---------------->
           wff                 wff

which is never ambiguous because of the different brackets:  [ A / x ]
takes a wff as an argument, and [_ A / x ]_ takes a class as an
argument.


(29-Sep-2006) eluniima allows us to reduce alephfp from 72 steps to 62
steps.  Compare the older version still at
http://us.metamath.org/mpegif/alephfp.html .  (I revisited alephfp
after the discussion on http://planetx.cc.vt.edu/AsteroidMeta/metamath ).
eluniima is interesting because there aren't any restrictions on A,
which can be completely unrelated to the domain of F.

rankxplim and rankxpsuc provide the answer to part of Exercise 14 of
Kunen, which asks the reader to "compute" the rank of a cross product.
(Some of the other ones can almost be "computed" - you take the previous
answer and add 1 - but it is a stretch to call this proof a
"computation".)  This is a very difficult and rather unfriendly problem
to give as a "homework exercise" - at least the answer should have been
provided as a clue to work out the proof, which is already hard enough
(especially since the answer has two parts, or three if we count the
empty cross product).  I wasted a lot of time on it, because I had to
prove something without having any clue what it would turn out to be.  I
wonder how many
people have actually worked this out:  no one in sci.logic seemed able
to answer it.
http://groups.google.com/group/sci.logic/browse_frm/thread/41fad0ba18a9dce1

df-ixp is new.  I'm somewhat torn about the bold X - a capital Pi
is used in many books, but as the comment says I'd prefer to reserve
that for products of numbers.  I'm open to comments on the notation.


(28-Sep-2006) I decided to restore the ancient (12-year-old) proof of
pwpw0 for "historical" reasons (see discussion at
http://planetx.cc.vt.edu/AsteroidMeta/metamath ).  It has actually been
modernized slightly, to remove the requirement that the empty set exist.
This eliminates the need for the Axiom of Replacement, from which
empty set existence is derived.  The original can be seen at
http://de2.metamath.org/metamath/set.mm .

rankuni improves rankuniOLD of 17-Sep by eliminating the unnecessary
hypothesis A e. V.  Although this will shorten future proofs, I
don't know if such shortening will end up "paying" for the extra 16
steps of overhead needed to eliminate A e. V.  But at least rankuni will
be easier to use than rankuniOLD, having one less condition to satisfy.


(17-Sep-2006) foprab2 is a new version (of foprab2OLD) that no longer
requires the "C e. V" hypothesis.  The new proof, using the 1st and 2nd
functions, is very different from that of foprab2OLD and the other
*oprab* theorems.


(16-Sep-2006) Steve Rodriguez says about his efnn0valtlem/efnn0valt
proof, "It's not short, but it took far less time than I expected, and
the result seemed so obvious that I felt nagged to prove it somehow."


(15-Sep-2006) opntop is an important theorem, because it connects metric
spaces to our earlier work on topology, by showing that a metric space
is a special case of a topology.  This lets us apply the theorems we
have already developed for topologies to metric spaces.  (It took some
work to get there; many of the theorems added in the last few days were
steps towards the goal of proving opntop.)

The members of a topology J are called its "open sets" in textbooks, and
this theorem provides a motivation for that term.  (We do not have a
separate definition for the open sets of a topology, since to say that A
is an open set of topology J we just say "A e. J".)


(12-Sep-2006) A number of proofs (some not shown in the Most Recent
list) were modified to better separate the various uses of the Axiom of
Replacement, and in particular to show where the Axiom of Extensionality
is needed.  The old zfaus was renamed to zfauscl, and the current zfaus
is new.

Some proofs in the axrep1 through axrep5 sequence were modified to
remove uses of Extensionality, so that zfaus now uses only Replacement
for its derivation.  The empty set existence zfnul now uses only zfaus
(and thus only Replacement) for its derivation.  The new zfnuleu then
shows how Extensionality leads to uniqueness (via the very useful bm1.1,
which uses only Extensionality for its derivation).  Finally, 0ex was
changed (with a slightly longer proof) so that it is now derived
directly from zfnuleu, to illustrate the path:

  ax-rep -> zfaus -> zfnul -> zfnuleu -> 0ex
                                 ^
                                 |
                               ax-ext

Some books try to postpone or avoid the Replacement Axiom when possible,
using only the weaker Separation (a.k.a.  Subset, a.k.a.  Aussonderung).
This can now be done in our database, if we wish, by changing zfaus and
zfpair from theorems to axioms.  (See the new last paragraph in the
ax-rep description.)


(7-Sep-2006) The set.mm database was reorganized so that the ZFC axioms
are introduced more or less as required, as you can see in the new Table
of Contents http://us2.metamath.org:8888/mpegif/mmtheorems.html#mmtc .
This lets you see what it is possible to prove by omitting certain
axioms.  For example, we prove almost all of elementary set theory (that
covered by Venn diagrams, etc.) using only the Axiom of Extensionality,
i.e. without any of the existence axioms.  And quite a bit is proved
without Infinity - for example, Peano's postulates, finite recursion,
and the Schroeder-Bernstein theorem (all of which are proved assuming
Infinity in many or most textbooks).


(4-Sep-2006) I will be changing the way that the theorems about metric
spaces are expressed to address some inconveniences.

Consider the following two ways of expressing "the distance function of
a metric space is symmetric".  In the present database, we use both
methods for various theorems.  (These examples, though, are hypothetical,
except that mssymv1 = mssymt).

 (1) mssymv1.1 $e |- D e. V $.
     mssymv1 $p |- ( ( <. X , D >. e. Met /\ A e. X /\ B e. X ) ->
                   ( A D B ) = ( B D A ) ) $=

 (2) mssymv2 $p |- ( ( M e. Met /\ A e. ( 1st ` M ) /\ B e. ( 1st ` M ) ) ->
                   ( A ( 2nd ` M ) B ) = ( B ( 2nd ` M ) A ) ) $=

The first way, mssymv1, shows the base set and the distance function
explicitly with the helpful letters X and D.

But often we want to say things about the metric space as a whole, not
just its components.  The second way, mssymv2, accomplishes that goal,
at the expense of readability:  it is less reader-friendly and more
verbose to say ( 2nd ` M ) rather than just D.

Although it is possible to convert from one to the other, it can be
awkward, especially converting from mssymv1 to mssymv2.  So practically
speaking, we will end up creating two versions of the same theorem,
neither of which is ideal.

A solution to this is provided by the following third version:

 (3) mssymv3.1 $e |- X = ( 1st ` M ) $.
     mssymv3.2 $e |- D = ( 2nd ` M ) $.
     mssymv3 $p |- ( ( M e. Met /\ A e. X /\ B e. X ) ->
                   ( A D B ) = ( B D A ) ) $=

The conclusion is simpler than either of the first two versions and
clearly indicates the intended meaning of an object with letters M, X,
and D.  Although the hypotheses are more complex, in the database they
will typically be reused by several theorems.

To obtain mssymv1 from mssymv3 is trivial:  we replace M with
<. X , D >., then we use op1st and op2nd to eliminate the hypotheses.

To obtain mssymv2 from mssymv3 is trivial:  we use eqid to eliminate
the hypotheses.

So, with this approach, we should never need to prove mssymv1 and
mssymv2 separately, since the conversion to either one in a proof is
immediate.

My plan is to convert everything to this approach and make most of the
existing theorems obsolete.  msf is the first one using this approach,
and it will replace the existing msft.  As always, comments are
welcome.


(3-Sep-2006) Although ax16b is utterly trivial, its purpose is simply to
support the statement made in the 7th paragraph of
http://us2.metamath.org:8888/mpegif/mmzfcnd.html


(29-Aug-2006) The value of the ball function is a two-place function,
i.e. it takes in two arguments, a point and a radius, and returns a set
of points.  I define it as an "operation" in order to make use of the
large collection of operation theorems, and also to avoid introducing a
new syntactical form.  However, we have two choices for expressing "The
value of a ball of radius R around a point P".  Note that M is a metric
space, X is the underlying space of a metric space, and D is a distance
function.

  Operation value notation:

    ( P ( ball ` M ) R )
    ( P ( ball ` <. X , D >. ) R )      when M = <. X , D >.

  Function value notation:

    ( ( ball ` M ) ` <. P , R >. )
    ( ( ball ` <. X , D >. )  ` <. P , R >. )     when M = <. X , D >.

The former is shorter and will result in shorter proofs in general, but
I'm not sure that using infix notation like we would for an operation
like "+" is the most natural or familiar.  There is no standard
notation in the literature, which uses English and also does not make
the metric space explicit.  I am open to comments or suggestions.

Since ( ball ` <. X , D >. ) acts like an operation value, we also have
a third choice and could say, equivalently,

    ( P ( X ball D ) R )      when M = <. X , D >.

for an even shorter notation, although I'm not sure how odd it looks.
But maybe I should use it for efficiency.

Note that I am using <. X , D >. e. Met instead of X Met D (via df-br),
even though the latter results in shorter proofs (and is shorter to
state).  It seems that Met feels more like a collection of structures
that happen to be ordered pairs of objects, than it does a relation,
even though those concepts are technically identical.


(28-Aug-2006) Two of the standard axioms for a metric space, that the
distance function is nonnegative and that the distance function is
reflexive, are redundant, so they have been taken out of the definition
df-ms to simplify it.  (We will prove the redundant axioms later as
theorems.)

The first part of the ismsg proof (through step 18) is used to get rid
of the antecedent X e. V that occurs in step 43.  (If either side of the
ismsg biconditional is true, it will imply X e. V, making it redundant
as an antecedent.)


(26-Aug-2006) Some small items related to yesterday's unctb (with a new
version today) were cleaned up:

  1. The unused hypothesis B e. V was removed from unictb.
  2. unpr was renamed to unipr for naming consistency.
  3. The hypotheses A e. V and B e. V were eliminated from unctb, with the help
     of the new theorem uniprg.


(22-Aug-2006) csbopeq1a will help make the '1st' and '2nd' function
stuff worthwhile; it lets us avoid the existential quantifiers that are
used in e.g. copsexg.  It is often easier to work with direct
computations rather than having to mess around with quantifiers.  I was
surprised that there are no distinct variable, class existence, or any
other restrictions on csbopeq1a.  There is no restriction on what B may
contain, which could be any random usage of x and y, not just ordered
pairs of them; but what we are doing is the logical equivalent of
substitution for ordered pairs in B as if it actually contained them.


(21-Aug-2006) I made some subtle changes to the little colored numbers.
Although this may seem like a trivial topic, and probably is, the
problems of obtaining a spectrum of colors with uniform brightness and
maximum distinguishability vs. hue changes aren't as easy as they might
first appear.  Perhaps someone has done it before, but all of the
spectrum mappings I could find have brightness that varies with color
and aren't suitable for fonts, in particular because yellow is hard to
read.  They also do not change color (as perceived by the eye) at a
uniform rate as you go through the spectrum, such as the color changes
crowded into the transition from red to green.  This is OK if you want
an accurate representation of color vs. wavelength, but our goal is to
be able to distinguish different colors visually in an optimal way.

Anyway I thought it was an interesting problem, so I thought I'd say
something about it.  Even though it pales in importance compared to
today's announcement of the evidence that dark matter exists. :)

The new "rainbow" color scheme now has the following properties:

1. All colors now have 50% grayscale levels.  Specifically, all colors
now have an L (level) value of 53 in the L*a*b color model (see
http://en.wikipedia.org/wiki/Lab_color).  This means if you convert a
screenshot to grayscale, the colored numbers will all have the same
shade of gray.  Yellow becomes brown at L=53, with an RGB value of
131-131-0.

2. Within the constraint of the 50% grayscale level, each color has
maximum saturation.  This is not as simple as it seems:  the RGB color
for the brightest pure blue, 0-0-255, has an L value of only 30 (because
the eye is not as sensitive to blue), so we have to use the 57%
saturated 110-110-255 to get L=53.  Green, on the other hand, just
requires 0-148-0 for L=53, which is 100% saturated.

3. The hues are not equally spaced numerically, but according to how the
eye is able to distinguish them.  I determined this empirically by
comparing the distinguishability of adjacent hues on an LCD monitor,
using a program I wrote for that purpose.  For example, the eye can
distinguish more hues between red and green than between green and blue.
This was a problem with the old colors, which seemed to have too many
indistinguishable blue-greens.  Now, as experimentally determined, the
transition from green to blue represents only 21% of the color values.

You can see the new color spectrum at the top of a theorem list page
such as http://us2.metamath.org:8888/mpegif/mmtheorems.html.

The spectrum position to RGB conversion is done by the function
spectrumToRGB in mmwtex.c of Metamath version 0.07.21 (20-Aug-2006).
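
The 50% grayscale constraint is easy to check numerically.  Below is a
minimal sketch (my own code, not the actual spectrumToRGB function from
mmwtex.c) that converts an sRGB triple to the L* lightness of the L*a*b
model, using the standard sRGB gamma expansion and the D65 luminance
weights:

```python
def srgb_to_lstar(r, g, b):
    """Convert an sRGB triple (0-255) to CIE L* lightness (0-100)."""
    def linearize(c):
        c /= 255.0
        # standard sRGB gamma expansion
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    # relative luminance Y with Rec. 709 primaries (D65 white point)
    y = 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
    # CIE L* as a function of Y (relative to white, where Y = 1)
    return 116.0 * y ** (1.0 / 3.0) - 16.0 if y > 0.008856 else 903.3 * y

# the three colors discussed above should all sit near L = 53
for rgb in [(131, 131, 0), (110, 110, 255), (0, 148, 0)]:
    print(rgb, round(srgb_to_lstar(*rgb), 1))
```

Running this confirms the figures quoted above to within about one unit
of L, and pure blue 0-0-255 indeed comes out much darker, near the L
value of 30 mentioned above.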


(16-Aug-2006) abfii5 is the last in the series, and just chains together
the others to obtain the final connection between the version used by subbas
and the version of the left-hand side that does not involve
equinumerosity.  This can allow us to express subbas in more elementary
terms, if we wish.


(14-Aug-2006) abfii4 is an interesting brainteaser:  we show that

  |^| { x | ( A (_ x /\ A. y ( ( y (_ x /\ -. y = (/) /\
                                   E. z e. om y ~~ z ) -> |^| y e. x ) ) }

is equal to

  |^| { x | ( A (_ x /\ A. y ( ( y (_ A /\ -. y = (/) /\
                                   E. z e. om y ~~ z ) -> |^| y e. x ) ) }

The second differs from the first by only one symbol:  the "x" is
changed to "A".  At first, it superficially looked like this would be an
elementary theorem involving set intersection.  But the proof turned out
quite difficult, not only the proof itself but the fact that it needs
some subtheorems that are somewhat difficult or nonintuitive in
themselves (fiint, intab via abfii2, abfii3, abexssex [based on the
Axiom of Replacement], intmin3), involving the theories of ordinals,
equinumerosity, and finite sets.  abfii2 and abfii3 are each used twice,
for different purposes, in different parts of the proof.  I found this
quite a challenge to prove and would be most interested if anyone sees a
more direct way of proving this.  (Note that "E. z e. om y ~~ z" just
means "y is finite".  A simpler way of stating this is "y ~< om", but
that requires the Axiom of Infinity, which I wanted - and was able - to
avoid for this proof.)

abfii4 arose while looking at equivalent ways to express the collection
of finite intersections of a set, which determines a basis for a
topology (see theorem subbas).  Textbooks never seem to mention the
exact formal ways of saying this, but just say, informally, "the
collection of finite intersections."


(11-Aug-2006) With today's 3bitr4rd, we now have the complete set of all
8 possibilities for each of the 3-chained-biconditional/equality series
3bitr*, 3bitr*d, 3eqtr*, and 3eqtr*d.


(9-Aug-2006) If you are wondering why it is "topGen" and not "TopGen", a
standard I've been loosely following is to use lowercase for the first
letter of classes that are ordinarily used as functions, such as sin,
cos, rank, etc.  Of course you will see many exceptions, mainly because
I try to match the literature when the literature gives something a
name.


(7-Aug-2006) intab is one of the oddest "elementary" set theory theorems
I've seen.  It took a while to convince myself it was true before I
started to work out the proof, and the proof ended up rather long and
difficult for this kind of theorem.  Yet the only set theory axiom used
is Extensionality.  Perhaps I just have a mental block - if anyone sees
a simpler proof let me know.  I haven't seen this theorem in a book;
it arose as part of something else I'm working on.

Two new definitions were added to Hilbert space, df-0op and df-iop.
The expressions ( H~ X. 0H ) and ( I |` H~ ), which are equivalent to
them, have been used frequently enough to justify these definitions.


(4-Aug-2006) 3eqtrr, 3eqtr2, 3eqtr2r complete the 8 possibilities for 3
chained equalities, supplementing the existing 3eqtr, 3eqtr3, 3eqtr3r,
3eqtr4, 3eqtr4r.  I was disappointed that the 3 new ones reduce the size
of only 33 (compressed) proofs, taking away a total of 185 characters
from them, whereas the new theorems by themselves increase the database
size by 602 characters.  So, the database will grow by a net 417
characters, and the new ones don't "pay" for themselves.  Nonetheless,
O'Cat convinced me that they should be added anyway for completeness.
He wrote:

  I would vote for adding them even though the net change is PLUS 400 some
  bytes.

  It just makes unification via utilities like mmj2 much easier -- input
  the formula and let the program find a matching assertion.

  Esp. now that you've done the work to analyze them, it is illogical not
  to make the change:  eventually enough theorems will be added to save
  the added bytes.  And within 10 years when we have bigger, faster
  memories people would think it quaint to be so stingy with theorems that
  serve a valid purpose.

  We're going to have PC's with a minimum of 1GB RAM next year as
  standard, and that will just grow as the new nanotech advances.

So, I guess I should also complete the 3eqtr*d, 3bitr*, and 3bitr*d
families at some point.


(2-Aug-2006) fiint replaces fiintOLD.


(24-Jul-2006) Note that the description of today's sucxpdom says it has
"a proof using AC", and you can see in the list of axioms below the
proof that ax-ac was used.  Exercise:  How does ax-ac end up getting
used?

Metamath has no direct command to show you this, but it is usually easy
to find by examining the outputs of the following two commands:

1. show trace_back sucxpdom

  ... numth2 fodom fodomg fnrndomg ccrd($a) df-card($a) cardval cardon cardid
  cardne carden carddomi carddom cardsdom domtri entri entri2 sucdom unxpdomlem
  unxpdom
  ^^^^^^^

2. show usage ax-ac / recursive

  ...imadomg fnrndomg unidom unidomg uniimadom uniimadomf iundom cardval cardon
  cardid oncard cardne carden cardeq0 cardsn carddomi carddom cardsdom domtri
  entri entri2 entri3 sucdom unxpdomlem unxpdom unxpdom2 sucxpdom...
                                        ^^^^^^^

The theorem we want is the last theorem in the first list that appears
in the second list.  In this case, it is unxpdom.  And, indeed, we see
that unxpdom appears in proof step 43 of sucxpdom, and we can check that
unxpdom uses ax-ac for its proof.

Of course, there may be other theorems used by the sucxpdom proof that
require ax-ac, and the only way to determine that is to see if any of
the other theorems used in the sucxpdom proof appear in the second list.
But for a quick indication of what ax-ac is needed for, the method above
can be useful.
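
The "last theorem in the first list that appears in the second list"
step can be mechanized.  Here is a sketch (the helper name is my own;
it assumes the two command outputs have been saved as
whitespace-separated label strings):

```python
def last_common_label(trace_back, ac_users):
    """Return the last label in the trace_back list that also appears
    in the recursive ax-ac usage list, or None if there is none."""
    users = set(ac_users)
    result = None
    for label in trace_back:
        if label in users:
            result = label  # keep overwriting; the last hit wins
    return result

# abbreviated versions of the two command outputs shown above
trace = ("cardne carden carddomi carddom cardsdom domtri entri entri2 "
         "sucdom unxpdomlem unxpdom").split()
users = ("entri entri2 entri3 sucdom unxpdomlem unxpdom unxpdom2 "
         "sucxpdom").split()
print(last_common_label(trace, users))
```

As in the manual inspection above, the answer here is unxpdom.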


(21-Jul-2006) oev2, etc. are part of Mr. O'Cat's cleanup project for
some of the remaining *OLDs.


(20-Jul-2006) istps5 is important because it reassures us that our
definitions of Top and TopSp match exactly the standard textbook
version, even though the latter is more complex when the words are
converted to symbols.  I don't think istps5 will have an actual
practical use, though, because of the simpler theorems we have
available.


(18-Jul-2006) It is awkward to eliminate the "J e. V" hypothesis from
istop and isbasis/isbasis2 when it is redundant (as in most uses) - it
requires working with a dummy variable then elevating it to a class
variable with vtoclga - so I'm replacing these with "g" versions that
have "J e. V" as an antecedent (actually "J e. A" as customary, to allow
easy use of ibi when we need only the forward implication).  I think the
non-g versions will be used so rarely that it's not worth keeping them,
so they will be deleted, and it is trivial to use instead the "g"
version + ax-mp.  [Update:  istop, isbasis, and isbasis2 have now been
deleted.]

I also modified Stefan's 0opn and uniopn to use an antecedent instead
of a hypothesis.

(17-Jul-2006) It is interesting that Munkres' definition of "a basis for
a topology" can be shortened considerably.  Compare
http://us2.metamath.org:8888/mpegif/isbasis3g.html (Munkres' version)
with http://us2.metamath.org:8888/mpegif/isbasisg.html (abbreviated
version).  Munkres' English-language definition is (p. 78):

  "Definition.  If X is a set, a _basis_ for a topology on X is a
  collection _B_ of subsets of X (called _basis elements_) such that

    (1) For each x e. X there is at least one basis element B containing x.

    (2) If x belongs to the intersection of two basis elements B1 and B2,
    then there is a basis element B3 containing x such that B3 (_ B1 i^i B2."
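
Spelled out symbolically (my own transcription of Munkres' wording, not
the set.mm formula), the two conditions on a collection B of subsets of
X read:

```latex
\begin{align*}
(1)\quad & \forall x \in X\; \exists b \in B\; (x \in b) \\
(2)\quad & \forall b_1, b_2 \in B\; \forall x \in b_1 \cap b_2\;
           \exists b_3 \in B\; (x \in b_3 \wedge b_3 \subseteq b_1 \cap b_2)
\end{align*}
```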

-------

The symbols and statement labels for topology were changed
in order to conform more closely to the terminology in Munkres, who
calls a member of our (old) Open a "topology" and calls a member of our
(old) Top a "topological space".

  Old symbol      New symbol
  ----------      ----------
  Top             TopSp
  Open            Top

  Old label       New label
  ---------       ---------
  ctop            ctps
  cope            ctop
  df-top          df-topsp
  df-open         df-top
  dfopen2         dftop2
  optop           eltopsp
  elopen1         istop
  elopen2         uniopnt
  op-empty        0opnt
  empty-op        sn0top
  op-indis        indistop

-------

If J is a topology on a set X, then X is equal to the union of J.  For
this reason, the first member of a topological space <X,J> is redundant.
I am doubtful that the topological space definition will offer any real
benefit and am considering deleting it.  If anyone knows of a good
reason to keep it, let me know.


(12-Jul-2006) I added definitions for topology (df-open and df-top),
and added some theorems that Stefan Allan proved back in Feb. and
March.  Note that "e. Open" and "e. Top" are equivalent ways of saying
something is a topology, which will be shown by the (in progress)
theorem "( <. U. A , A >. e. Top <-> A e. Open )", and "e. Open" is
often simpler.


(7-Jul-2006) Over 100 *OLD's were removed from the database,
and 83 remain, 27 of which are in the Hilbert space section.  After
subtracting the *OLD's, there are 6053 theorems in the non-Hilbert-space
part and 1181 for Hilbert space, a total of 7243.

This is the first time I have become aware that the Metamath Proof
Explorer has officially passed the 6000 theorem "milestone", which
happened 53 (green) theorems ago.  The 6000th theorem was mt4d, added on
18-Jun-2006.


(6-Jul-2006) With the updated projlem31, all uses of ~~>OLD have been
eliminated.  Soon I will remove the corresponding *OLD theorems from
the database.

(5-Jul-2006) With ege2le3, we have finally completed the
non-Hilbert-space conversion of ~~>OLD based theorems, so that no
theorem in the non-Hilbert-space section depends on ~~>OLD anymore.  We
can't delete their *OLD theorems yet, because ~~>OLD is still used in a
few places in the Hilbert space section, but that should happen soon.
(If you type "show usage cliOLD/recursive" you will see, before the
Hilbert space section, the *OLDs that will be deleted.  I will delete
them when the Hilbert space stuff is finished being converted to ~~>.)

By the way, we can also now delete all uses of sum1oo ("show usage
csuOLD /recursive"), which I will do soon.


(4-Jul-2006) Currently there are separate derivations of 2 <_ e,
e <_ 3, and 2 <_ e /\ e <_ 3.  Eventually, I'll probably delete the
first two, but for now my immediate goal is to replace the *OLDs.


(1-Jul-2006) The proof of class2seteq was shortened.  Compare the
previous version, class2seteqOLD.


(30-Jun-2006) Continuing with the *OLD cleanup project, the future
ereALT (to replace the existing ereALTOLD) will be an alternate
derivation of ere.  The proof of ereALTOLD specifically focuses on the
number e and has a much simpler overall derivation than ere, which is a
special case of the general exponential function derivation.  The lemmas
for ereALTOLD are mostly reused to derive the bounds on e (ege2OLD,
ele3OLD, ege2le3OLD).  Since ereALTOLD ends up being a natural byproduct
of those theorems, I have so far kept it even though it is redundant, in
order to illustrate a simpler alternate way to prove e is real.

By the way, if you are wondering how the mmrecent page can "predict" a
future theorem, the web page generation program simply puts "(future)"
after the comment markup for any label not found in the database (and
also issues a warning to the user running the program).  The mmrecent
page is refreshed with the "write recent" command in the metamath
program.


(21-Jun-2006) The conversion of cvgcmp3ce to cvgcmp3cet is one of our
most complex uses to date of the Weak Deduction Theorem, involving
techniques that convert quantified hypotheses to antecedents.  The
conversion is performed in stages with 2 lemmas.  (Eventually I want to
look at proving the theorem form directly, with the hope of reducing the
overall proof size.  But for now my main goal is to replace the *OLDs.)


(20-Jun-2006) As mentioned below, most of the recent convergence stuff
like cvgcmp3ce is slowly replacing the *OLDs.  My goal is to be able to
delete a big batch of *OLDs, used to develop the exponential function
df-ef, within 2 weeks.  We can't get rid of them until the last piece in
the chain of proofs is completed.


(6-Jun-2006) caucvg will replace caucvgOLD as part of the *OLD
elimination project.  BTW Prof. Wachsmuth doesn't know where this proof
is from; he may have originated it:  "I wrote many of the proofs years
ago and I don't remember a good reference for this particular one"
(email, 6-Jun).  It is a nice proof to formalize because it is more
elementary than most, in particular avoiding lim sups.  (We have
df-limsup defined, but it needs to be developed.  As you can see,
df-limsup is not so trivial.  One of its advantages is that it is
defined - i.e. is an extended real - for all sequences, no matter how
ill-behaved.)


(5-Jun-2006) kbass2t is the 2nd bra-ket associative law mentioned in
the notes of 17-May-2006.


(4-Jun-2006) The description of ax-un was made clearer based on a
suggestion from Mel O'Cat.


(2-Jun-2006) unierr is our first theorem related to qubits and quantum
computing.  In a quantum computer, an algorithm is implemented by
applying a sequence of unitary operations to the computer's qubit
register.  A finite number of gates selected from a fixed, finite set
cannot implement an arbitrary unitary operation exactly, since the set
of unitary operations is continuous.  However, there is a small set of
quantum gates (unitary operations) that is "universal," analogous to the
universal set AND and NOT for classical computers, in the sense that any
unitary operation may be approximated to arbitrary accuracy by a quantum
circuit involving only those gates.  Theorem unierr tells us,
importantly, that the worst-case errors involved with such
approximations are only additive.  This means that small errors won't
"blow up" and destroy the result in the way that, say, a tiny
perturbation can cause completely unpredictable behavior in weather
prediction (the "butterfly effect").
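
The bound in question is the usual error-accumulation inequality for
products of unitaries:  if V1 and V2 approximate U1 and U2, then
|| U1 U2 - V1 V2 || <= || U1 - V1 || + || U2 - V2 ||.  Below is a
numerical sanity check of that inequality in plain Python with 2 x 2
matrices (my own helper functions; an illustration of the inequality,
not of the exact statement of unierr in set.mm):

```python
import math

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(a, b):
    """2x2 matrix difference."""
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def opnorm(a):
    """Operator (spectral) norm of a 2x2 complex matrix: square root of
    the largest eigenvalue of A^H A, via the quadratic formula."""
    b00 = abs(a[0][0])**2 + abs(a[1][0])**2
    b11 = abs(a[0][1])**2 + abs(a[1][1])**2
    b01 = a[0][0].conjugate() * a[0][1] + a[1][0].conjugate() * a[1][1]
    tr, det = b00 + b11, b00 * b11 - abs(b01)**2
    return math.sqrt((tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2)

def rot(theta):
    """A simple one-parameter family of 2x2 unitaries (rotations)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[complex(c), complex(-s)], [complex(s), complex(c)]]

# U1, U2 are the target gates; V1, V2 are slightly-off approximations
U1, U2 = rot(0.30), rot(1.10)
V1, V2 = rot(0.32), rot(1.07)
total = opnorm(matsub(matmul(U1, U2), matmul(V1, V2)))
budget = opnorm(matsub(U1, V1)) + opnorm(matsub(U2, V2))
print(total <= budget + 1e-12)
```

The proof idea is the same telescoping trick used in textbooks:
U1 U2 - V1 V2 = (U1 - V1) U2 + V1 (U2 - V2), and multiplying by a
unitary does not change the operator norm.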


(1-Jun-2006) Someone complained about not being able to understand infpn
(now called infpn2), so I decided to make the "official" infpn use only
elementary notation.


(27-May-2006) The label of the ubiquitous 'exp' (export) was changed to
'ex' to prevent it from matching the math token 'exp' (exponential
function), in anticipation of the 24-Jun Metamath language spec change.
Normally I don't mention label changes here - they're documented at the
top of set.mm - but ex, used 724 times, is the 5th most frequently used
theorem, so this change will almost certainly impact any project using
set.mm as a base.  Although ex is of course not new, I re-dated it to
show up in today's "Most Recent Proofs."


(25-May-2006) I think csbnest1g is interesting because the first
substitution is able to "break through" into the inside of the second
one in spite of the "clashing x's".  Compare csbnestg, where x and y
must be distinct.  I found it a little tricky to prove, and I wasn't
even sure if it was true at first.


(18-May-2006) kbasst proves one of the associative laws mentioned
yesterday:

  Dirac:  ( |A> <B| ) |C> = |A> ( <B| |C> )
  Metamath: ( ( A ketbra B ) ` C ) = ( ( ( bra ` B ) ` C ) .s A )


(17-May-2006)   Dirac bra-ket notation deciphered

Most quantum physics textbooks give a rather unsatisfactory, if not
misleading, description of the Dirac bra-ket notation.  Many books will
just say that <A|B> is defined as the inner product of A and B, or even
say that it _is_ the inner product, then go off and give mechanical
rules for formally manipulating its "components" <A| and |B>.  For
physicists, "formally" means "mechanically but without rigorous
justification."

If the dimensions are finite, there is no problem.  A finite-dimensional
Hilbert space is essentially linear algebra, and it is possible to prove
that any vector and linear operator can be represented by an n-tuple
(column vector) and matrix respectively.  In finite dimensions, we just
define |A> to be a column vector and <A| its conjugate transpose, a row
vector.  Viewed this way, the various combinations such as <A|B> (the
inner product) and |A><B| (a matrix) make sense in finite dimensions.
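
In the finite-dimensional picture this is easy to check numerically.
A sketch in plain Python (the helper names are my own, not from
set.mm), using the physicists' inner product convention
(conjugate-linear in the bra) so that it matches the Dirac notation:

```python
def inner(b, c):
    """<B|C> : inner product, conjugate-linear in the bra argument."""
    return sum(bi.conjugate() * ci for bi, ci in zip(b, c))

def ketbra(a, b):
    """|A><B| : the outer-product matrix with entries A_i * conj(B_j)."""
    return [[ai * bj.conjugate() for bj in b] for ai in a]

def apply_op(m, c):
    """Matrix applied to a column vector."""
    return [sum(mij * cj for mij, cj in zip(row, c)) for row in m]

A = [1 + 2j, 0.5j]
B = [3 - 1j, 2j]
C = [0.25 + 0j, 1 - 1j]

# the associative law ( |A><B| ) |C> = |A> ( <B||C> ):
# applying the outer product to |C> gives the scalar <B|C> times |A>
lhs = apply_op(ketbra(A, B), C)
rhs = [inner(B, C) * ai for ai in A]
print(all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs)))
```

Each component works out exactly:  (|A><B| C)_i = sum_j A_i conj(B_j)
C_j = <B|C> A_i, which is the content of the associative law discussed
below.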

But what is the "transpose" of a vector in infinite-dimensional Hilbert
space?  To answer this, some of the more mathematically "rigorous" books
say that |B> is a member of Hilbert space (a vector) and call <A| a
member of a "dual space" of functionals, which is defined such that the
value of functional <A| evaluated at vector |B> equals the inner product
of A and B.  While this solves some of the problems, mysteries remain,
such as, what is the "outer product" |A><B|?  They don't say (it is very
rare to find a QM book with "outer product" in the index), and they
evade the problem by implicitly assuming an (unproven) associative law
that allows them to manipulate their way out of having to explain it.
But the in-between stages of manipulation can contain undefined objects,
and while the mechanical rules "work," there isn't an obvious way to
formalize them.  In particular, this associative law cannot be proved
from the Hilbert space axioms using only the partial definitions given in most
books.

Maybe it's just the way my brain works, but I find it difficult to
become comfortable with a concept that isn't precisely defined, even if
it "works."  An advantage (as well as a frustration) of Metamath is
that it forces you to resolve any such issues if you want to make
progress.

Fortunately, I found a book by Eduard Prugovecki (pronounced
Proo-go-vetch-kee), _Quantum Mechanics in Hilbert Space_, that (on p.
376, 1981 ed.; p. 370, 1971 ed.) presents a defining equation for
|A><B|, which you can see as today's theorem kbvalvalt, from which we
can deduce a direct definition (df-kb) that works in both finite- and
infinite-dimensional Hilbert space.  With this final clue, I was able to
map precise set-theoretical expressions uniquely to and from all
possible Dirac bra and ket combinations.  The mapping is somewhat
complicated, but it is complete and well-defined.  In order to
accomplish this, I introduced two new definitions.  The "bra" function
takes in a vector and outputs a functional.  The "ketbra" operation
takes two vectors and outputs an operator.  Their definitions are given
by df-bra and df-kb.

  Added 18-May-06:  I noticed that the Wikipedia bra-ket article, added
  last year, also defines outer product (in the same way as Prugovecki).

Note that a "functional" is any function from H~ (Hilbert space) to CC
(complex numbers), and an "operator" is any function from H~ to H~.

The rule for combining bras and kets is that a bra may be placed after a
ket and vice-versa, but two bras and two kets may not be juxtaposed.  In
other words, bras and kets must alternate.  Juxtaposition can be thought
of roughly, but not exactly, as a kind of "product."  Juxtaposition is
associative:

  ( |A> <B| ) |C> = |A> ( <B| |C> )
  ( <A| |B> ) <C| = <A| ( |B> <C| )

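Both laws can be checked numerically in the finite-dimensional analogy.
Here is a Python sketch in 2 dimensions (the helper names inner,
outer, and apply_op are mine, not set.mm's):

```python
# Checking both associative laws numerically in 2 dimensions (the
# helper names inner / outer / apply_op are mine, not set.mm's).

def inner(A, B):                     # <A|B>, conjugate the bra side
    return sum(a.conjugate() * b for a, b in zip(A, B))

def outer(A, B):                     # |A><B| as a matrix
    return [[a * b.conjugate() for b in B] for a in A]

def apply_op(M, v):                  # operator value, matrix x column
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

A, B, C = [1 + 1j, 2j], [3, 1 - 1j], [2, 1j]

# ( |A> <B| ) |C>  =  |A> ( <B| |C> )
print(apply_op(outer(A, B), C) == [inner(B, C) * a for a in A])  # True

# ( <A| |B> ) <C|  =  <A| ( |B> <C| )  -- both sides as row vectors
M = outer(B, C)
lhs = [inner(A, B) * c.conjugate() for c in C]
rhs = [sum(A[i].conjugate() * M[i][j] for i in range(2)) for j in range(2)]
print(lhs == rhs)                                                # True
```
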
(Added 24-May-2006) There is a technicality in our development.
Mathematicians, and set.mm, define inner product such that
( C x. ( A .i B ) ) = ( ( C .s A ) .i B ).  Physicists define it such
that ( C x. ( A .i B ) ) = ( A .i ( C .s B ) ) where ".s" is the scalar
product of a number and a vector and ".i" is the inner product of two
vectors.  See the description for ax-his3.  I used to think this was
arbitrary, with physicists having a slightly less natural definition for
no good reason.  (I have never seen a book explain why.)  It turns out
that the physicist definition is necessary for the bra-ket notation to
work!  Specifically, the second associative law above fails with the
mathematicians' definition.  Since set.mm adopts the mathematicians'
definition in order to be compatible with math books, we therefore will
consider the bra-ket <A|B> to be defined as ( B .i A ).  So now we have
the "best of both worlds" and can choose either <A|B> (which physicists
consider synonymous with inner product) or (A .i B) = <B|A>, to match
whatever text we're working with.
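
The difference between the two conventions can be seen concretely in
finite dimensions, where <A| is forced to be the conjugate transpose
of A, so the "matrix product" <A| times |B> conjugates the A side.
A Python sketch (math_ip and phys_ip are my names):

```python
# A small Python sketch of the convention issue (math_ip / phys_ip
# are my names).  math_ip is linear in its FIRST argument, like
# set.mm's ( A .i B ); phys_ip conjugates the first argument instead.

def math_ip(A, B):                    # mathematicians' ( A .i B )
    return sum(a * b.conjugate() for a, b in zip(A, B))

def phys_ip(A, B):                    # physicists' <A|B>
    return sum(a.conjugate() * b for a, b in zip(A, B))

A, B = [1j, 2], [1, 1 + 1j]

# In finite dimensions <A||B> is (conjugate transpose of A) times B,
# i.e. phys_ip(A, B) -- and that equals ( B .i A ):
row_times_col = sum(a.conjugate() * b for a, b in zip(A, B))
print(row_times_col == math_ip(A, B))   # False
print(row_times_col == math_ip(B, A))   # True: <A|B> = ( B .i A )
```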

There are actually 4 kinds of objects that result from different bra and
ket juxtapositions:  complex numbers, vectors, functionals, and operators.
This is why juxtaposition is not "exactly" a product, because its
meaning depends on the kind of objects that the things being juxtaposed
represent.  The starting operations on vectors are as follows:

                                                             Finite dim.
   Operation       Notation  Metamath        Value           analogy

   ket             |A>       A               vector          column vector
   bra             <A|       ( bra ` A )     functional      row vector
   inner product   <A|B>     ( B .i A )      complex number  complex number
   outer product   |A><B|    ( A ketbra B )  operator        matrix

The inner product <A|B> can also be expressed as ( ( bra ` A ) ` B )
- as today's theorem bravalvalt shows - and this will be needed to use
the table below (in line 5).  (Lines 3 and 4 in the above table are
redundant, since they are special cases of lines 5 and 4 below; line 4
above is computed ( A ketbra ( `' bra ` ( bra ` B ) ) ) = ( A ketbra B ).)

We will represent the four kinds of possible results in the "Value"
column above as v, f, c, and o respectively.  After accounting for the
restrictions on juxtaposing bras and kets (e.g., we can never have an
inner product followed by a ket), exactly the following cases can occur
when two Dirac subexpressions T and U are juxtaposed to produce a new
Dirac expression TU:

  T  U  TU  Metamath operation           Description

  c  c   c  ( T x. U )                   Complex number multiplication
  c  f   f  ( T .fn U )                  Scalar product with a functional
  v  c   v  ( U .s T )                   Scalar product (note T & U swap)
  v  f*  o  ( T ketbra ( `' bra ` U ) )  Outer product (with converse bra)
  f  v   c  ( T ` U )                    Inner product (bra value)
  f  o   f  ( T o. U )                   Functional composed with operator
  o  v   v  ( T ` U )                    Value of an operator
  o  o   o  ( T o. U )                   Composition of two operators

  *Note:  In line 4, U must be a continuous linear functional (which will
  happen automatically if U results from a string of kets and bras).
  This is needed by the Riesz theorem riesz2t, which allows the
  inverse bra to work.  The other lines have no restriction.

  See df-hfmul for ".fn", df-co for "o.", and df-cnv for "`'".

  Line 5 can be stated equivalently:

    f* v   c  ( U .i ( `' bra ` T ) )      Inner product (with converse bra)

So, the "operation" of juxtaposition of two Dirac subexpressions can
actually be any one of 8 different operations!  This is why we can't
easily express the Dirac notation directly in Metamath, since a class
symbol for an operation is supposed to represent only one object.

Supplementary note:  Physics textbooks will often have equations with an
operator sandwiched between a bra and a ket.  Its juxtaposition with a
bra or ket also now becomes easy to formalize:  match an entry from the
table above where the operator corresponds to an "o" input.

As an example of the use of the above table, consider the associative
laws above and their set-theoretical (Metamath) translations, which
we will eventually prove as theorems in the database.

  Dirac:  ( |A> <B| ) |C> = |A> ( <B| |C> )
  Metamath: ( ( A ketbra B ) ` C ) = ( ( ( bra ` B ) ` C ) .s A )

  Dirac:  ( <A| |B> ) <C| = <A| ( |B> <C| )
  Metamath:  ( ( ( bra ` A ) ` B ) .fn ( bra ` C ) ) =
                        ( ( bra ` A ) o. ( B ketbra C ) )

Note that ( A ketbra B ) above is really ( A ketbra ( `' bra ` ( bra ` B ) ) )
- see table line 4 - but we will adopt the convention of canceling
converse-bra bra since ( `' bra ` ( bra ` B ) ) = B.  In some cases
we can't cancel:

  Dirac:  ( | A >. <. B | ) ( | C >. <. D | ) =
          | A >. ( <. B | ( | C >. <. D | ) )

  Metamath:  ( ( A ketbra B ) o. ( C ketbra D ) ) =
             ( A ketbra ( `' bra ` ( ( bra ` B ) o. ( C ketbra D ) ) ) )

There you have it, a complete formalization of Dirac notation in
infinite dimensional Hilbert space!  I've never seen this published
before.

For an intuitive feel for the table above, it can be useful to compare
the finite dimensional case using vectors and matrices.  Suppose A and
B are column vectors of complex numbers

     [a_1]      [b_1]
     [a_2]      [b_2]

Then |B> is the same as B, and <A| is the row vector [a_1* a_2*] (where
* means complex conjugate).

Then all the 8 Dirac operations can be justified with vector and matrix
multiplications.  For example, <A|B> becomes the inner product
a_1* x. b_1 + a_2* x. b_2, and |B><A| becomes the 2x2 matrix

     [b_1 x. a_1*   b_1 x. a_2*]
     [b_2 x. a_1*   b_2 x. a_2*]
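
Here is a plain-Python check that this matrix really acts like
|B><A|: applying it to any vector v gives <A|v> times B (the helper
names are mine):

```python
# Plain-Python check that the matrix above acts like |B><A|:
# applying it to a vector v gives <A|v> times B.  Helper names mine.

def inner(A, v):                      # <A|v>, conjugating the bra side
    return sum(a.conjugate() * x for a, x in zip(A, v))

def outer(B, A):                      # |B><A|: entry (i,j) = b_i * a_j*
    return [[b * a.conjugate() for a in A] for b in B]

def apply_op(M, v):                   # matrix times column vector
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

A, B, v = [1 + 1j, 2 - 1j], [3j, 1], [1j, 2]
M = outer(B, A)
print(apply_op(M, v) == [b * inner(A, v) for b in B])   # True
```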

(It should be mentioned that Dirac notation can also be rigorously
modeled by an algebra known as "rigged Hilbert space," which can be
useful for certain theoretical purposes.  However, rigged Hilbert space
is rather abstract and somewhat removed from the Hilbert space we are
working with.)


(11-May-06) The Description for today's sumeqfv
http://us2.metamath.org:8888/mpegif/sumeqfv.html mentions that A
represents A(k) for those used to the standard way it would be written
in a textbook, meaning that the class substituted for A would normally
have a free variable k in it.  How do we know that A(k) is intended?  It
can be inferred from the fact that A and k do not appear together in a
distinct variable group, i.e. there is no $d A k constraint.

In principle, all theorems could be converted to the A(k) textbook
notation above, and explicit distinct variable groups could be
eliminated, if we make the following assumptions:

1. Implicitly assume all set variables are distinct (like textbooks do).

2. Implicitly assume that a set variable does _not_ occur in an
expression represented by a wff or class variable unless it is present
in the variable's argument list.  E.g., j may _not_ occur in a class
expression substituted for A(k); y _may_ occur (although it doesn't have
to) in a wff expression substituted for ph(x,y,z); etc.  This is also
what textbooks usually assume.

With the above assumptions, we could define, for most theorems (with
an exception explained below), an unambiguous mapping from Metamath's
notation (with explicit $d's and implicit free-variable arguments) to
textbook notation (with implicit $d's and explicit free-variable
arguments) and vice-versa.  Possibly a future display method could do
such a translation automatically, so that the notation becomes more
familiar to mathematically experienced readers.

So why don't we just adopt the textbook-style notation as the underlying
standard that Metamath is based on?  The answer is that the proof
checker would be much more complicated.  Also, see the 2nd paragraph on
http://us2.metamath.org:8888/mpegif/df-sb.html for a problem related to
using this notation to represent substitution, although this could be
avoided (but with associated algorithmic complexity) if we were strict
about assumptions (1) and (2) above.

Of course, a more complex proof checker is no big deal for a computer,
once the program is written.  In fact most proof checkers other than
Metamath do use the explicit-free-variable approach.  So why doesn't
Metamath just use a more complex algorithm?  First, the simpler the
algorithm, the more confidence you can have in its robustness.  But more
importantly, implicit in a complex algorithm is the associated
difficulty in learning how the algorithm works.  I believe that it is
important to understand how the algorithm works if you "really" want to
understand how each proof step was arrived at and not just to accept it
on faith.  In beginning logic courses, the complex rules for free
variables and proper substitution take a significant effort to learn,
whereas Metamath's substitution rule can presumably be learned in a few
minutes (although I have no data on this).  For me, it was a
philosophical goal to make the math as transparent as possible by using
the simplest possible algorithm, and the $d method accomplished that (to
my mind).  Or even better, in principle (although awkward in practice),
the simplest algorithm would use only axioms that avoid $d's entirely:
http://us2.metamath.org:8888/mpegif/mmzfcnd.html

A problem with the explicit-free-variable approach is that it cannot
represent set variables that don't have to be distinct, such as in
x = y -> y = x and A. x A. y ph <-> A. y A. x ph.  To accommodate this,
we could have a notation representing the opposite of a $d that says two
variables are not necessarily distinct.  Alternately, we could adopt the
approach of restating the axiom system so that set variables are always
distinct, as described on
http://planetx.cc.vt.edu/AsteroidMeta//mmj2Feedback (search for "Here is
an idea").  However, in set theory and beyond, situations where set
variables are not required to be distinct are not very common.


(10-May-06) A reader comments at http://www.jaredwoodard.com/blog/?p=5
that he wishes I'd spend less time on Hilbert space and more on cleaning
up *OLDs.

The cleaning up of *OLDs has actually been happening "behind the scenes"
even though people may not notice it.  Almost 200 *OLDs have been
eliminated since January (there were 380 then; now there are 185).
Yesterday's geoser and expcnv will eliminate their corresponding *OLDs,
for example.  Mel O'Cat is also working on a list of 27 of the *OLDs.

I realize hardly anyone cares about the Hilbert space stuff, and regular
visitors are probably bored to tears seeing the dreary purple theorems
day after day.  :)  (I do try not to go too long without a pretty green
one now and then.)  My long term goal is to build a foundation that will
let me explore rigorously some new ideas I have for Hilbert lattice
equations that may lead to writing a new paper.  I also want to build up
a foundation for theorems related to quantum computing.  In a few days
hopefully we will have a theorem related to error growth in qubits
(quantum gates).


(9-May-06) Compare geoser to the old one, geoserOLD, that it replaces.
Wikipedia entry:  http://en.wikipedia.org/wiki/Geometric_series
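
A quick numerical sketch of what geoser states: for |A| < 1 the
partial sums of A^k converge to 1 / (1 - A).

```python
# Quick numerical sketch of the geometric series sum: for |A| < 1,
# sum_{k>=0} A^k converges to 1 / (1 - A).

A, N = 0.4 + 0.3j, 200          # |A| = 0.5
partial = sum(A ** k for k in range(N))
print(abs(partial - 1 / (1 - A)) < 1e-12)   # True
```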


(5-May-06) The Riesz representation theorem is used to justify the
existence and uniqueness of the adjoint of an operator.  In particular,
the rigorous justification of Dirac bra-ket notation in quantum
mechanics is dependent on this theorem.  See also Wikipedia:
http://en.wikipedia.org/wiki/Riesz_representation_theorem


(13-Apr-06) One thing to watch out for in the literature is how the
author defines "operator".  I put some notes at
http://us2.metamath.org:8888/mpegif/df-lnop.html on the various
definitions:  for some they are arbitrary mappings from H to H, for
others they are linear, for still others they are linear and bounded.
In set.mm, "operator" means an arbitrary mapping.


Today's goal is:  a linear operator is continuous iff it is bounded.
This will be called "lncnbd" when it is completed later today. lnopcon
provides the basic proof of this:  it is not a trivial proof, 220 steps
long, and to me it is non-intuitive.  Many authors forget about the case
of the trivial Hilbert space, where sometimes a result holds and other
times not.  lnopcon does hold, but we have to prove the trivial case
separately, and in step 219 we combine the nontrivial case of step 194
with trivial case of step 218.


(12-Apr-06) The astute observer may have noticed that the dates on
the "Recent Additions to the Metamath Proof Explorer" now have 4-digit
years, e.g. 12-Apr-2006 instead of 12-Apr-06.  Version 0.07.15 of
the metamath program implements 4-digit date stamps, and set.mm has
been updated with 4-digit years.  The program still recognizes 2-digit
years (for the 'write recent' command) but assumes they fall between
1993 and 2092.


(10-Apr-06) The reason the xrub proof is long is that it involves 9
cases:  3 cases for B (real, -oo, and +oo), and for each B, 3 cases for
x. (We handle some of them simultaneously in the proof.)  This theorem
means we only have to "scan" over the reals, rather than all of the
extended reals, in order to assert that B is an upper bound for set A.
This is true even if A contains non-reals or if B is non-real (+oo
or -oo).

When we quantify over all extended reals, often we have to consider the
3 cases of real, -oo, +oo separately.  The advantage of this theorem is
that we don't have to handle the last two cases anymore, so some proofs
will become significantly shorter as a result.


(6-Apr-06) The proof of unictb is very different from Enderton's, which
I found somewhat awkward to formalize.  Instead, it is effectively a
special case of Takeuti/Zaring's much more general uniimadom.

It is also interesting how simple it is to go from the indexed union to
the regular union version of a theorem, whereas it can be rather
difficult in the other direction.  For example, iunctb to unictb is
trivial through the use of uniiun.  But for the finite version of this
theorem, compare the difficulty of going from the regular union version,
unifi to iunfi, requiring the not-so-trivial fodomfi, which was
proved for this purpose.

The conversion of unifi to iunfi involved substituting z for x and
{ y | E. x e. A y = B } for A in unifi, using dfiun2 on the consequent,
and doing stuff to get the antecedents right.  The "doing stuff"
ends up being not so simple.

Perhaps if I had to do it over, it might have been simpler to prove
iunfi first, then trivially obtain unifi via uniiun, although I'm not
really sure.

Both iunctb and iunfi are intended ultimately to be used by a Metamath
development of topology, which Stefan Allan has started to look at.


(1-Apr-06) Today's avril1 http://us2.metamath.org:8888/mpegif/avril1.html
is a repeat of last year's, except for a small change in the
description.  But I bring it up again in order to reply to last year's
skeptics.

Unlike what some people have thought, there is nothing fake about this
theorem or its proof!  Yes, it does resemble an April Fool's prank, but
the mathematics behind it are perfectly rigorous and sound, as you can
verify for yourself if you wish.  It is very much a valid theorem of ZFC
set theory, even if some might debate its relative importance in the
overall scheme of things.  The only thing fake is that Prof. Lirpa uses a
pseudonym, since he or she wishes to remain anonymous.

Tarski really did prove that x=x in his 1965 paper.  While it is
possible he wasn't the first to do so, he did not attribute the theorem
to anyone else.

The theorem

  -. ( A P~ RR ( i ` 1 ) /\ F (/) ( 0 x. 1 ) )

importantly tells us we cannot prove, for example,

  ( A P~ RR ( i ` 1 ) /\ F (/) ( 0 x. 1 ) )

if ZFC is consistent.  If we utter the latter statement, that will
indeed be a hilarious joke (assuming ZFC is consistent) for anyone who
enjoys irony and contradiction!  But anyone who could prove the latter
statement would achieve instant notoriety by upsetting the very
foundation that essentially all of mathematics is built on, causing it
to collapse like a house of cards, into a pile of (Cantor's) dust that
would blow away in the wind.  That assumes, of course, that the paradox
is not hushed by the established mathematical community, whose very
livelihoods would be at stake.  In that case, the discoverer might
achieve wealth instead of fame.

So, in effect the theorem, being preceded by the "not" sign, really
tells us:  "I am _not_ an April Fool's joke."  Thus we are reminded of
the Liar Paradox, "this sentence is not true," but with an important
difference:  paradoxically, avril1 is not a paradox.

For those whose Latin is rusty, "quidquid germanus dictum sit, altum
viditur" means "anything in German sounds profound."  Just as logicians
have done with Latin ("modus ponens" and so on), set theorists have
chosen German as their primary obfuscating language.  For example, set
theory texts lend great importance and mystery to the otherwise trivial
subset axiom by calling it "Aussonderung."  This helps keep the number
of set theorists at a comfortable level by scaring away all but a few
newcomers, just enough to replace those retiring.

To derive avril1, we have used an interdisciplinary approach that
combines concepts that are ordinarily considered to be unrelated.  We
have also used various definitions outside of their normal domains.
This is called "thinking outside of the box."  For example, the
imaginary constant i is certainly not a function.  But the definition of
a function value, df-fv, allows us to substitute any legal class
expression for its class variable F, and i is a legal class expression.
Therefore ( i ` 1 ) is also a legal class expression, and in fact it can
be shown to be equal to the empty set, which is the value of "meaningless"
instances of df-fv, as shown for example by theorem ndmfv.

http://us2.metamath.org:8888/mpegif/df-fv.html
http://us2.metamath.org:8888/mpegif/ndmfv.html

Now that the technique has been revealed, I hope that next year someone
else will make a contribution.  You have a year to work on it.


(28-Mar-06) sspr eliminates the hypotheses of the older version, which
has been renamed to ssprOLD.


(27-Mar-06) As of today's version of set.mm, 183 out of the 315 theorems
with names ending "OLD" were removed, so there are only 132 *OLDs left.
This has made set.mm about 300KB smaller.  (The 132 remaining can't just
be deleted, since they currently are referenced by other proofs, which
will have to be revised to eliminate their references.  Mel O'Cat has
started working on some of them.)


(20-Mar-06) Stefan has done the "impossible," which is to find an even
shorter proof of id.  (See the note of 18-Oct-05 below.)  His new proof
strictly meets the criteria I use for accepting shorter proofs
(described at the top of the set.mm file).  He writes, "Too bad you
don't get a special prize for shortening this one!"  I agree; any
suggestions?

About a1d, he writes, "[a1d] is not a shorter proof in compressed
format, and is in fact the same size as the old one.  However it has
fewer steps if expanded out into axioms, so you might want to include it
in set.mm anyway."


(24-Feb-06) Stefan's sylcom proof has 1 fewer character in set.mm than
the previous, and 9 fewer characters in its HTML file.  I think we may
be approaching the theoretical limit.  :)


(22-Feb-06) The proof of efcj uses some obsolete theorems with the old
convergence ~~>OLD, but I don't have the updated ones ready yet and just
wanted to get efcj out of the way since we will need it for more
trignometry.  Eventually the proof of efcj will be updated.  Note that
"obsolete" doesn't mean "unsound"; the proof is perfectly rigorous.  The
purpose of the new notation is to make proofs more efficient (shorter)
once everything is updated with it.

(17-Feb-06) efadd was completed a little sooner than I thought.  Here
are some statistics:  the set.mm proof (efadd + 28 lemmas) takes 47KB
(out of 4.5MB for set.mm).  The HTML pages take 4.7MB (out of 372MB
total for mpegif).

(13-Feb-06) Over the next couple of weeks, we will be proving what has
turned out to be a difficult theorem - the sum of exponents law for the
exponential function with complex arguments, i.e. e^(a+b)=e^a.e^b.  Even
though textbook proofs can seem relatively short, the ones I've seen
gloss over many of the tedious details.  After several false starts I
came up with a proof using explicit partial sums of product series and
explicit comparisons for the factorial growth (we will use the recent
fsum0diag and faclbnd4 for this).  The whole proof will have around
30 lemmas.
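
The idea of the proof can be sketched numerically: group the product
of the two series for e^a and e^b along diagonals (the role played by
fsum0diag) and compare with e^(a+b).  Pure Python, small N:

```python
# A numerical sketch of the proof idea: group the product of the two
# series for e^a and e^b along diagonals (cf. fsum0diag) and compare
# with e^(a+b).

from cmath import exp
from math import factorial

a, b, N = 0.3 + 0.2j, -0.1 + 0.5j, 30

# Diagonal partial sum: sum over n <= N of
#   sum_{k=0}^{n}  a^(n-k)/(n-k)!  *  b^k/k!
diag = sum(a**(n - k) / factorial(n - k) * b**k / factorial(k)
           for n in range(N + 1) for k in range(n + 1))

print(abs(diag - exp(a) * exp(b)) < 1e-9)   # True: e^(a+b) = e^a e^b
```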

(12-Feb-06) nonbool demonstrates that the Hilbert lattice is
non-Boolean.  This proves that quantum logic is not classical.  Of
course this is well known, but I've only seen it stated without proof,
so I came up with a formal demonstration.  It seems the dimension
must be at least 2 to demonstrate non-Boolean behavior.

Note that we have already shown that it is orthomodular (in pjoml4),
but Boolean is a special case of orthomodular, so that in itself doesn't
demonstrate that quantum logic is not classical.
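
The failure of distributivity can be sketched in a toy model of C^2
(my own finite model, not set.mm's proof): take the 1-dimensional
subspaces x = span(1,1), y = span(1,0), z = span(0,1).  For lines
through the origin the meet is {0} unless the lines coincide, and
y v z here is all of C^2, so x ^ (y v z) = x while
(x ^ y) v (x ^ z) = {0}.

```python
# A finite-dimensional sketch of the non-Boolean behavior (my own toy
# model in C^2, not set.mm's proof).  x, y, z are lines through the
# origin, given by spanning vectors; for lines, x ^ y is {0} unless
# the lines coincide, and y v z below is all of C^2.

def same_line(u, v):
    return u[0] * v[1] == u[1] * v[0]

x, y, z = (1, 1), (1, 0), (0, 1)

lhs = x                                   # x ^ (y v z) = x ^ C^2 = x
rhs = '{0}'                               # (x ^ y) v (x ^ z)
if same_line(x, y) or same_line(x, z):
    rhs = x
print(lhs == rhs)                         # False: distributivity fails
```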

(11-Feb-06) Even though the climmul proof is long, I'm not unhappy about
it, since Gleason dedicates over 2 pages to the proof (although some of
it is an informal discussion of how one goes about coming up with such a
proof).

While our proof roughly follows Gleason, our trick of constructing a new
positive number less than both A and 1, given a positive number A, is
not in Gleason - he uses the infimum of A and 1 (bottom of page 170),
which would be more awkward (for us at least) to deal with.  This trick
is proved by recrecltt and is used in climmullem5.
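
The trick is easy to check numerically.  (As I read the note, the
construction is B = 1 / (1/A + 1); since 1/A + 1 exceeds both 1/A
and 1, B is less than both A and 1.  Treat the exact formula as my
hedged reading of recrecltt.)

```python
# Numerical sketch of the trick: for positive A, the number
# B = 1 / (1/A + 1) is positive and less than both A and 1
# (1/A + 1 exceeds both 1/A and 1).  The formula is my reading
# of recrecltt, so treat it as hedged.

def shrink(A):
    assert A > 0
    return 1 / (1 / A + 1)

for A in (0.001, 0.5, 1.0, 7.3, 1e6):
    B = shrink(A)
    print(0 < B < A and B < 1)   # True in every case
```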

(9-Feb-06) I found a slightly shorter equivalent for ax-groth expanded to
primitives.  The idea was to use fun11 at step 42, so that the old steps
42-60 become 42-48.  But the result was a little disappointing.  I had
higher hopes for the idea but it only ended up removing one binary
connective.  At least the proof is 59 instead of 71 steps.  (The old one
has been kept temporarily as grothprimOLD.)  Probably the biggest problem
is the repeated use of grothlem (4 times) to expand binary relations.
I wonder if there is a shorter way to effectively express this concept.

(8-Feb-06) dummylink was added for a project to interface O'Cat's mmj2
Proof Assistant GUI with the metamath program's Proof Assistant, but
I've discovered that it can be quite handy on its own as suggested by
its description.  For more background see "Combining PA GUI and CLI - an
interim solution?" at the bottom of web page
http://planetx.cc.vt.edu/AsteroidMeta/mmj2ProofAssistantFeedback

(Downloaders - the Metamath download containing this proof will be in
tomorrow's download.  In general, the Most Recent Proofs usually take
about a day to propagate to the downloads.)

(4-Feb-06) More shorter proofs by O'Cat - pm2.43d, pm2.43a.

rcla4cv ended up shortening 26 proofs by combining rcla4v and com12.
The result was a net reduction in the database size, even after
accounting for the new space taken by rcla4cv.

(3-Feb-06) Mel O'Cat found shorter proofs for sylcom, syl5d, and syl6d
while having fun with his new toy, the Proof Assistant GUI.

Note:  the new proofs of syl5d and syl6d have the same number of
logical steps, but proofs are shorter if we include the unseen
wff-building steps.  Out of curiosity I restored the original syl5d
proof, since it had already been shortened by Josh Purinton, and called
it syl5dOLDOLD.  Here are the complete proofs for the syl5d versions:

syl5d:  14 steps

  wph wps wta wch wth wph wta wch wi wps syl5d.2 a1d syl5d.1 syldd $.

syl5dOLD:  16 steps

  wph wps wch wth wi wta wth wi syl5d.1 wph wta wch wth syl5d.2 imim1d
  syld $.

syl5dOLDOLD:  19 steps

  wph wta wps wth wph wta wch wps wth wi syl5d.2 wph wps wch wth syl5d.1
  com23 syld com23 $.


(30-Jan-06) Today we start a brand new proof of the binomial theorem
that will be much shorter than the old one.  It should also be much more
readable.  This is what the new one will look like (A e. CC, B e. CC):

( N e. NN0 -> ( ( A + B ) ^ N ) = sum_ k e. ( 0 ... N )
         ( ( N C. k ) x. ( ( A ^ ( N - k ) ) x. ( B ^ k ) ) ) )

Compare to the old, binomOLD:

( ( N e. NN0 /\ A. k e. ( 0 ... N ) ( F ` k ) =
               ( ( N C. k ) x. ( ( A ^ k ) x. ( B ^ ( N - k ) ) ) ) ) ->
                   ( ( A + B ) ^ N ) = ( <. 0 , N >. sumOLD F ) )
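
The new statement is easy to spot-check numerically for a sample N
(Python's math.comb plays the role of ( N C. k )):

```python
# Spot-checking the new binomial theorem statement for one N
# (math.comb plays the role of the binomial coefficient ( N C. k )).

from math import comb

A, B, N = 2 + 1j, 1 - 3j, 7
lhs = (A + B) ** N
rhs = sum(comb(N, k) * A ** (N - k) * B ** k for k in range(N + 1))
print(abs(lhs - rhs) < 1e-6)   # True
```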

(24-Jan-06) (Compare note of 22-Oct-05.)  Per the request of Mel O'Cat,
I eliminated all connective overloading in set.mm by making weq, wel,
and wsb "theorems" so that he can use set.mm with his GUI Proof
Assistant.  This involved moving set theory's wceq, wcel, and wsbc
up into the predicate calculus section, which is somewhat confusing,
so I added extensive comments that will hopefully explain it.

Note that the web page "proofs" of weq, wel, and wsb have only one step:
this is because they are syntax proofs, and all syntax building steps
are suppressed by the web-page generation algorithm, which doesn't
distinguish weq, etc. from "real" theorems.  I'm not yet sure if it's
worth changing the algorithm for this special case.  To see the actual
steps, in the Metamath program type "show proof weq /all".

(20-Jan-06) supxrcl shows the usefulness of the extended reals:  the
supremum of any subset always exists.  Compare the non-extended real
version suprcl, which has a complicated antecedent that must be
satisfied.

(19-Jan-06) A new set theory axiom, ax-groth, was added to the database.
This axiom is used by Mizar http://mizar.org to do category theory (that
ZFC alone cannot do), and I think it is appropriate to add it to set.mm.
One of the negative aspects of this axiom (aesthetically speaking) is
that it is "non-elementary" and very ugly when expanded to primitives,
unlike the ZFC axioms.  I worked out grothprim because I was curious to
see what it actually looks like.  I don't think grothprim will actually
be used for anything since it is impractical; instead, ax-groth would be
the starting point.  However, grothprim can provide a benchmark for
anyone seeking a shorter version.  There may be a theoretical reason why
it can't be as short as say ax-ac, but I don't think anyone knows what
the shortest possible equivalent is.

mmset.html has also been updated to include ax-groth below the ZFC
axioms.

I wrote to Jeff Hankins:

  I added ax-groth partly in response to your email on Mycielski's ST set
  theory,* although it's been on my backburner for a while.  In my mind,
  ax-groth "completes" set theory for all practical purposes.  (The Mizar
  people, who use this axiom, also think so.)  Unlike the controversial
  assertions of ST, ax-groth is relatively mild and uncontroversial - I
  don't know of any debate over it, unlike the debate on the Continuum
  Hypothesis.  I am pretty sure that Mycielski's Axiom SC implies ax-groth
  from his comments, although I haven't worked out a proof.  So
  ZFC+ax-groth is most likely a subset of ST.

* http://www.ams.org/notices/200602/fea-mycielski.pdf - free AMS
sign-up required to view article

(12-Jan-06) The exponential function definition df-ef is new.
Yesterday's version of df-ef was reproved as theorem dfefOLD that
will eventually be deleted after the old infinite summation notation is
phased out.

Compare the new exponential function value formula, efvalt, with the old
one, efvaltOLD.  Don't you agree that it is much easier to read?  This
kind of thing makes me believe that the effort to introduce the new
summation notation was worthwhile.  :)  In addition, will have a much
nicer version of the binomial theorem (whose old version has already
been renamed to binomOLD), with a much shorter proof - stay tuned!

(11-Jan-06) isumvalOLDnew links the old and new definitions of infinite
sum, allowing us to temporarily reuse theorems in the old notation until
they are phased out.  See comments below of 1-Nov-05, 2-Nov-05,
20-Dec-05, and 21-Dec-05 regarding the new finite/infinite sum notation
df-sum.

The present definition of the exponential function, df-ef, makes use of
the obsolete infinite sum notation. dfefOLDnew will replace df-ef in the
next few days and become its new official definition.  The old df-ef
will become a (temporary) theorem that will be used to support the old
infinite sum notation until it is phased out.

gch-kn was updated with new hyperlinks.

(10-Jan-06) Regarding the 9-Jan-06 item in "Also new", the primary
reason I added the "/except" switch to "minimize_with" was to exclude
3syl.  While 3syl may shorten the uncompressed normal proof, it often
makes compressed proofs grow longer.  This happens when the intermediate
result of two chained syl's is used more than once.  When the
intermediate result disappears due to 3syl, two formerly common
subproofs have to be repeated separately in the compressed proof - part
of the compression is achieved by not repeating common subproofs.  So,
typically I exclude 3syl then minimize with it separately to see if the
compressed proof shortens or lengthens.  Maybe I'll add an option to
also check the compressed proof length instead of or in addition to the
normal proof length, but the "/exclude" was easier to program, and
curiously 3syl is the only problematic theorem I'm aware of.

(9-Jan-06) climOLDnew is a temporary theorem that links the old and new
limit relations, ~~>OLD and ~~>.  This will let us "jump ahead" and work
on exponentiation, etc. with the new notation before cleaning up all the
~~>OLD stuff (which I will still clean up eventually).  The linkage is
needed to avoid any gaps in the released set.mm.  (The metamath command
"verify proof *" should always pass for the daily releases of set.mm,
ensuring absolute correctness of its contents - even the stuff that's
obsolete.)

The main difference between ~~>OLD and ~~> is that ~~>OLD has the rigid
constraint that the sequence F be a function from NN to CC, whereas ~~>
allows F to be any set with lots of irrelevant garbage in it as long as
it eventually has function values in CC beyond some arbitrary point.
This can make ~~> much more flexible and easier to use.

The uppercase "OLD" in climOLDnew means the theorem will go away; for my
cleanup I will be phasing out and deleting all theorems matching *OLD*.
Currently there are 380 *OLD* theorems due to be phased out.  They can
be enumerated by counting the lines of output of the metamath command
"show label *OLD*/linear".
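The same count can also be obtained outside metamath by scanning set.mm
for labels containing "OLD".  The sketch below is hypothetical (it
assumes only that each $a/$p statement is preceded by its label) and
runs on an inline sample instead of the real file:

```python
import re

# Count statement labels containing "OLD", mimicking the line count of
# "show label *OLD*/linear".  In the Metamath format, a $a or $p
# statement is introduced by "label $a ..." or "label $p ...".
def count_old_labels(text):
    labels = re.findall(r'(\S+)\s+\$[ap]\s', text)
    return sum(1 for lab in labels if 'OLD' in lab)

sample = ("climOLDnew $p |- ph $= ... $.  "
          "df-fsumOLD $a |- ps $.  "
          "id $p |- ch $= ... $.")
```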

(6-Jan-06) syl*el* are all possible combinations of syl5,syl6 analogs
for membership and equality.  I added them to shorten many proofs, since
these patterns occur frequently.  (Since 1999 I've added the syl*br*
versions of these for binary relations and found them useful, so I
decided to add the syl*el* versions.)  There is a curious asymmetry in
which ones ended up being useful:  syl5eqel got used over 30 times,
whereas syl5eleq isn't used at all so far.  I don't know if this is
because I wrote the original proofs with certain patterns subconsciously
repeated, or if there is something more fundamental.

(3-Jan-06) r19.21aiva adds 319 bytes to the database, but it reduces the
size of about 50 (compressed) proofs by 765 bytes total, for a net
reduction in database size of 446 bytes.

(21-Dec-05) All theorems that involved df-fsum have been updated to use
the dual-purpose (finite and infinite) df-sum instead.  So, we now have:

       Definition               Token                Symbol
  Yesterday   Today       Yesterday Today     Yesterday   Today
  df-sum      df-sum      sum_NEW   sum_      \Sigma_NEW  \Sigma
  df-fsum     df-fsum     sum_      sum_OLD   \Sigma      \Sigma_OLD
  df-fsumOLD  df-fsumOLD  sumOLD    sumOLD    \Sigma_OLD  \Sigma_OLDOLD

The names with "OLD" are now kind of oddly inconsistent, but everything
with "OLD" in it (whether label or token) will eventually be deleted so
it doesn't really matter.

(20-Dec-05) The new finite sum stuff looks like it will be very useful,
and we will need an infinite sum version to replace the current df-isum.
Rather than repeat the whole development with new equality, bound
variable, etc. utility theorems, I decided to combine the two
definitions.  The new combined definition is called df-sum, which is
basically the union of two definitions.  The index range (finite or
infinite) determines whether the sum is finite or infinite.  See the
comments in df-sum.  We need about half a dozen utility theorems.  Then,
after changing the "sigmaNEW" to "sigma", we can "plug in" the new
definition and re-use the theorems we have already developed for finite
sums without further modification.

fzneuzt is the basic theorem that lets us distinguish the finite
half vs. the infinite half of df-sum.

(14-Dec-05) fsum1sNEW (to be renamed fsum1s) exploits class2set to
eliminate the hypothesis of fsum1slem, so that we require only existence
of A(M) as a member of some arbitrary class B, rather than requiring
that it be a complex number (as yesterday's fsum1s requires).  This will
shorten future proofs by allowing us to apply fsum1sNEW directly when A
is a real, rational, etc.  I had almost forgotten about class2set, which
I think is a neat trick.  Yesterday's fsum1s will be renamed to
fsum1sOLD and eventually deleted.

I'm not sure if fsum1s2 will be useful, but it lets us show off an
application of the interesting fz1sbct.

(13-Dec-05) fsum1slem shows an example of a use for the new
substitution-for-a-class notation.  Compare it to the implicit
substitution version fsum1.

fsum1s turns the hypothesis A e. V of fsum1slem into an antecedent.
Since A is quantified, we have to work a little harder than usual to
accomplish this.

(4-Dec-05) See http://planetx.cc.vt.edu/AsteroidMeta//metamathMathQuestions
for comments on equsb3.

(30-Nov-05) csbnestg is the same as csbnestglem except that it has fewer
distinct variable restrictions.  Its proof provides another example of a
way to get rid of them; the key is using a dummy variable that is
eliminated with csbcog.  I think it is a neat theorem and was pleasantly
surprised that so few distinct variable restrictions were needed.  The
antecedents just say that A and B are sets and are easily eliminated in
most uses.  By having antecedents instead of A e. V, B e. V hypotheses,
we can make more general use of the theorem when the sethood of A and B
is conditioned on something else; hence the "g" in "csbnestg".

(23-Nov-05) In all versions of set.mm from 18-Nov-05 to 22-Nov-05 inclusive,
the line

  htmldef "QQ" as "<IMG SRC='bbieq.gif' WIDTH=13 HEIGHT=19 ALT='QQ' ALIGN=TOP>";

should be

  htmldef "QQ" as "<IMG SRC='bbq.gif' WIDTH=13 HEIGHT=19 ALT='QQ' ALIGN=TOP>";

Thanks to Jeff Hankins for pointing this out.


(18-Nov-05) sbccom is the same as sbccomlem except that it has fewer
distinct variable restrictions.  Its proof shows an example of how to
get rid of them when they are not needed.

-------

I made around 80 changes to the bixxx series names to be
consistent with earlier bixxx -> xxbix changes in prop. calc.  E.g.
biraldv was changed to ralbidv.

  r - restricted
  al - for all
  bi - biconditional
  d - deduction
  v - $d instead of bound-variable hypothesis

Also, bi(r)abxx were changed to (r)abbieqxx e.g. biabi was changed to
abbieqi.

  ab - class abstract (class builder)
  bi - hypothesis is biconditional
  eq - conclusion is equality
  i - inference

As usual, all changes are listed at the top of set.mm, and as instructed
there can be used to create a script to update databases using set.mm as
their base.  As always, better naming suggestions are welcome.

(17-Nov-05) abidhb is a very neat trick, I think!  It will let us do
things that the Weak Deduction Theorem by itself can't handle.  For its
first use, we create a "deduction" form of the bound-variable hypothesis
builder for function values, hbfvd - this is actually a closed form that
allows _all_ hypotheses to be eliminated, since 'ph' is arbitrary and
can be replaced with a conjunct of the hypotheses.  And the only thing
hbfvd needs is the "inference" version hbfv in step 5!  Before I thought
of abidhb, hbfvd was going to require a long chain of hbxxd's (hbimd,
hbabd,...) that would build up to the function value definition.  I was
actually starting to get depressed about the amount of work that would
have been needed.  But as they say, laziness is the mother of invention.
Now, we can just add hbxxd's as needed, starting from the hbxx
"inference" versions!

(15-Nov-05) Note that fsumeq2 requires a $d for A and k, whereas fsumeq1
doesn't.  On the other hand, we have analogously iuneq1 and iuneq2,
neither of which require the bound variable to be distinct!  I spent a
lot of time trying to get rid of it for fsumeq2 by changing the
definition df-fsum, but it always seemed that if I got rid of it in
fsumeq2 it would show up in fsumeq1.  So I don't know whether it is
theoretically possible to get rid of it.  In the current version of the
fsumeq2 proof, the $d is needed to satisfy resopab in steps 9 and 10.

Getting rid of $d A k in fsumeq2 would be advantageous if I add an
"explicit substitution" form of induction like (for ordinals) Raph
Levien's findes, where the hypothesis findes.2 has the substituted
variable free in the expression to be substituted.  So, if anyone can
solve this, let me know!

(14-Nov-05) Today we introduce a new definition, df-csbc, the proper
substitution of a class variable for a set into another class variable.
We use underlined brackets to prevent ambiguity with the wff version,
otherwise [ x / y ] A R B could mean either x e. { y | A R B } for the
df-sbc wff version or <. [ x / y ] A , B >. e. R for the df-csbc class
version.  So instead we use [_ x / y ]_ A for the class version.  One
reason I chose the underline is that it is easy to do in Unicode and
LaTeX, but if you have another idea for the notation let me know.  See
notes of 5-Nov-05 for other notes on the definition.

(13-Nov-05) I decided to make the new finite summation notation df-fsum
official.  The old has been renamed to df-fsumOLD.  I am uncertain about
whether to keep the old (under a different name yet to be determined) or
delete it eventually.  There are 61 theorems using it (21 of which are
the binomial theorem binom) which I hope to eventually re-prove with the
new notation.

(5-Nov-05) Regarding sbabex:  The notation "[ y / x ] ph" means "the
proper substitution of y for x in phi".  We do not have a separate
notation for the class version of this, so until such time (if it
becomes common enough to warrant a special notation), the idiom
"{ z | [ y / x ] z e. A }" means "the proper substitution of y for x in
class variable A".  In other words we turn the class into a wff - the
predicate "is in A" - then do the proper substitution, and finally turn
the result back into a class by collecting all sets with this predicate.
I think that's a neat trick, and it will become the definition if we
introduce a notation for it.  Note that the argument of "[ y / x ]" is
"z e. A", which is a wff.

(2-Nov-05) I have about a dozen theorems in progress with the current
'df-fsum' notation, that I might as well finish before switching to the
new notation.  These will provide reference proofs that will make the
corresponding versions in the new notation easier to prove, but they
will eventually be deleted (assuming I adopt the new notation, whose
definition I'm still fine tuning.)

Not all theorems will be shorter with the new notation, which is one
reason for my indecision.  For example:

Old: (fsumserz)

|- F e. V  =>  |- ( N e. ( ZZ> ` M ) ->
       ( <. M , N >. sum F ) = ( ( <. M , + >. seq F ) ` N ) )

New: (fsumserzNEW)

|- F e. V  =>  |- ( N e. ( ZZ> ` M ) ->
        sumNEW k e. ( M ... N ) ( F ` k ) = ( ( <. M , + >. seq F ) ` N ) )


(1-Nov-05) The proof of the binomial theorem painfully illustrates that
the current notation for summations is very awkward to work with,
particularly with nested summations.

A new definition I'm experimenting with is df-fsumNEW, which, unlike
df-fsum (which is a class constant with no arguments), has a dummy
variable k and two other arguments.  To indicate the sum A^1 + A^2 +...+
A^N, we can write

  sumNEW k e. ( 1 ... N ) ( A ^ k )

instead of the present

  ( <. 1 , N >. sum { <. k , y >. | ( k e. ZZ /\ y = ( A ^ k ) ) } )

(where usually the class builder is stated as a hypothesis like
F = { <. k , y >. | ( k e. ZZ /\ y = ( A ^ k ) ) } to keep the
web page proof size down).  Nested sums are even more awkward, as
the hypothesis "G =" in the binomial lemmas shows.

With the new notation, the binomial theorem would become:

  ( N e. NN0 -> ( ( A + B ) ^ N ) = sumNEW k e. ( 0 ... N )
          ( ( N C. k ) x. ( ( A ^ k ) x. ( B ^ ( N - k ) ) ) ) )
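The statement above is easy to spot-check numerically; here is a small
Python sketch, with math.comb playing the role of the choose function
( N C. k ):

```python
from math import comb

# Numeric spot-check of the binomial theorem stated above:
# sum_{k=0..N} C(N,k) * A^k * B^(N-k) = (A+B)^N.
def binom_sum(A, B, N):
    return sum(comb(N, k) * A**k * B**(N - k) for k in range(N + 1))

assert binom_sum(3, 4, 5) == (3 + 4)**5
```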

The price we pay is that 'sumNEW' is not just a set-theoretical class
constant like 'sum', but instead a symbol with arguments and a bound
variable, analogous to indexed union df-iun.  In particular, its
soundness verification, while still simple, is not as "trivial" as with
new class constants.  There is nothing wrong with this in principle,
but it is contrary to my simplicity goal of introducing only new class
constants for new definitions, thus keeping the number of "primitive"
syntactical structures to a bare minimum.  But in this case I think
practicality will win out.  The proofs should be more elegant with
'sumNEW' (later to be changed to 'sum' if I decide to keep it),
and I also think it is more readable.

Of course, soundness justification will not be an issue with the
eventual Ghilbert version.

To further elaborate on my simplicity preference (for which df-fsumNEW
will be an exception), below I reproduce my response to an email Josh
Purinton wrote me (on Oct. 18) regarding the notation for fzvalt.

  > Consider using square brackets for 'compatibility' with the
  > distinction between a closed and open interval.

My response:

  I understand what you are getting at, but there is a slight problem.
  df-fz is just the class symbol "..." which is used as an operation,
  and the parentheses are just the normal parentheses that surround an
  operation value.  Thus "( 1 ... 3 )" means "( ... ` <. 1 , 3 >. )".

  I could define a new syntactical structure or pattern "[ A ... B ]" but
  then I couldn't use all the equality, hb*, etc. theorems we have for
  operation values.  After basic set theory development, which is more or
  less finished, I've been trying to introduce only new class constant symbols
  (with exceptions for a few very common things like the unary minus "-u";
  actually that is the only exception so far).  In addition to allowing us
  to reuse general-purpose theorems, the soundness justification is
  trivial for a new constant class symbol, which is what I like most about
  that approach.

  Also, "( m ... n )" is really more of a discrete, unordered list
  than a continuous closed interval.

  I will probably never be completely happy with "..." in particular
  because it is nonstandard and unfamiliar, but on the other hand it has
  turned out to be very useful for theorems involving finite sums.  But I
  didn't consider it so important that it justifies its own new
  syntactical pattern.  It is so rare in the literature (if it ever
  occurs) that I was pleased to stumble across Gleason's partial version
  of the notion.

  For the four real intervals (x,y), (x,y], [x,y), [x,y] I haven't decided
  what to do yet.  It would be preferable to have them be just operations
  in the form of new class constant operation symbols, but I haven't
  thought of any good notation to accommodate them in this form.  We could
  have e.g.  "(.,.]" or "(]" so we'd have "( A (.,.] B )" or "( (.,.] ` <.
  A , B >. )" or "( A (] B )" etc. but these are odd-looking.  What I will
  end up doing is very open at this point.  Maybe it's time to start using
  words like "closed", "rclosed", "lclosed", "open", etc. in some way?

  Right now we have only the two workhorses "( F ` A )" and "( A F B )"
  for general function/operation values.  Analogously we have "A e. R" and
  "A R B" for predicates/binary relations.  They are the only general
  patterns the reader has to be familiar with for virtually all new
  definitions.  In theory these are all that we need, although certain
  notations become very awkward (e.g. extending them to more arguments via
  ordered pairs, and the real intervals you have brought up).

Note that right now, we are using the "workhorse" ( A F B ) for
virtually all of the new definitions of sums, sequences, shifts,
sequential integer sets, etc.  I like it because there is only one
underlying notation, i.e. operation value, that you have to be aware of.
But I think the present df-fsum stretches the limit of what is
practical and "user-friendly".

--------------------

(24-Oct-05) Today we introduce the superior limit limsup, which will
be one of our principal uses of the extended reals.


(22-Oct-05) It appears I misspoke yesterday when I said "The new syntax
allows LALR parsing," and I changed it to "The new syntax moves us
closer to LALR parsability."  From Peter Backes:

  It makes it more LALR than before, but not completely. ;)

  What remains is

    1) set = set (trivial, since redundant)      [i.e. weq vs. wceq]
    2) set e. set (trivial, since redundant)     [i.e. wel vs. wcel]
    3) [ set / set ] (trivial, since redundant)  [i.e. wsb vs. wsbc]
    4) { <. set , set >. | wff } (we already discussed it and agreed it
    was not easy to solve)                       [i.e. copab]
    5) { <. <. set , set >. , set >. | wff } (ditto)   [i.e. copab2]

These are all easy to fix by brute force (eliminating weq, wel, and wsb,
and changing "{", "}" to "{.", "}." in copab and copab2) but I don't
want to be too hasty and am looking into whether there are "nicer" ways
to do this first.


(21-Oct-05) A big change (involving about 121 theorems) was put into the
database today:  the indexed union (ciun, df-iun) and indexed
intersection symbols (ciin, df-iin) are now underlined to distinguish
them from ordinary union (cuni, df-uni) and intersection (cint, df-int).
Although the old syntax was unambiguous, it did not allow for LALR
parsing of the syntax constructions in set.mm, and the proof that it was
unambiguous was tricky.  The new syntax moves us closer to LALR
parsability.  Hopefully it improves readability somewhat as well by
using a distinguished symbol.  Thanks to Peter Backes for suggesting
this change.

Originally I considered "..." under the symbol to vaguely suggest
"something goes here," i.e. the index range in the 2-dimensional
notation, but in the end I picked the underline for its simplicity (and
Peter preferred it over the dots).  Of course I am open to suggestion and
can still change it.  In the distant future, there may be
2-dimensional typesetting to display Metamath notation (probably
programmed by someone other than me), but for now it is an interesting
challenge to figure out the "most readable" 1-dimensional representation
of textbook notation, where linear symbol strings map 1-1 to the ASCII
database tokens.

iuniin is the same as before but has an expanded comment, and also
illustrates the new notation.

(18-Oct-05) Today we show a shorter proof of the venerable theorem id.
Compare the previous version at http://de2.metamath.org/mpegif/id.html .

fzvalt is the same as before but has an expanded comment.

(15-Oct-05) Definition df-le was changed to include the extended reals,
and df-le -> xrlenltt -> lenltt connects the new version to the existing
theorems about 'less than or equal to' on standard reals.

(14-Oct-05) The set of extended reals RR*, which includes +oo and -oo,
was added, with new definitions df-xr, df-pinf, df-minf, and df-ltxr.
The old < symbol was changed to <_RR, the new df-ltxr symbol was called
<, and the ordering axioms were reproved with the new < symbol (and they
remain the same, since in RR, < and <_RR are the same by ltxrlt).  This
allows us to use all remaining theorems about RR in the database
unchanged, since they are all restricted to elements of RR.  The
theorems proved today are the minimum necessary to retrofit the database
in this way.  I was pleasantly surprised at how easy it was to add in
the extended reals.

Unlike textbooks, which just say +oo and -oo are "new" distinguished
elements without saying what they are, we must state concretely what
they are in order to use them.  So I picked CC for +oo and { CC }
for -oo.  Many other choices are possible too.  The important thing
is not what elements are chosen for them, but how the new < ordering
is defined.
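As a loose analogy only (IEEE floats, not the set-theoretic choices
+oo = CC and -oo = { CC } above), Python's infinities make the same
point: the representation chosen for the endpoints is irrelevant, and
only the extended ordering matters:

```python
# IEEE infinities as an analogue of the extended reals RR*: whatever
# bit patterns stand for +oo and -oo, what we actually use is the
# extended ordering.
pinf, minf = float('inf'), float('-inf')
assert minf < -1e300 < 0 < 1e300 < pinf   # -oo < any real < +oo
assert not (pinf < pinf)                  # irreflexive, like <
```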

Unlike some analysis books, Gleason finds it unnecessary to extend the
arithmetic operations (only the ordering) for +oo and -oo, so I will be
avoiding that too unless a clear advantage becomes apparent.  E.g. some
books define +oo + A = +oo, A / +oo = 0, etc. but for us that is now
undefined.

(6-Oct-05) modal-b is analogous to the Brouwer modal logic axiom if we
map "forall x" to box ("necessarily") and "exists x" to diamond
("possibly").  See http://plato.stanford.edu/entries/logic-modal/ and
also http://www.cc.utah.edu/~nahaj/logic/structures/systems/s5.html

In fact, our axioms ax-4, ax-5, and ax-6 (plus ax-1/2/3, modus ponens,
and generalization) are *exactly* equivalent to modal logic S5 under
this mapping!  This was not intended when I first came up with the
axioms.  Our axioms have a different form because I arrived at them
independently when I didn't know what modal logic was, but they (or
Scott Fenton's ax46/ax-5) are provably equivalent to S5 and can be used
(under the mapping) as alternate axioms for S5.  Conversely, all the
theorems of S5 will automatically map to theorems of our "pure"
predicate calculus.

Axiom ax-7 has no modal logic analog, since it has two variables.
However, if we restrict x and y to be distinct, it might be possible to
make an analogy between it and the Barcan Formula BF (see the
plato.stanford.edu page), particularly because the BF converse is also
true (http://plato.stanford.edu/entries/actualism/ltrueCBF.html) as it
also is for ax-7.

(4-Oct-05) ser1f0 - The difficulty of proving this "obvious" fact was
surprising.

(30-Sep-05) df-isum is a new definition for the value of an infinite sum
that will eventually replace df-sumOLD.  Its advantage is that the sum
can start at any index N instead of the fixed index 1.  isumvalt is the
first use of the new definition.

The notation of df-isum perhaps isn't as nice as df-sumOLD, but I don't
know how else to do it since we are using a linear notation with no
subscripts.  (The infinity superscript is not a separate symbol but part
of the fixed infinite summation symbol, represented as "sumoo" in the
database.)

(27-Sep-05) The obsolete definitions df-seq0OLD and df-climOLDOLD, along
with all theorems dependent on them, have finally been purged from the
set.mm database, lightening it a bit.

dfseq0 is nice.  Perhaps I'll interchange df-seq0 and dfseq0 some day.

(19-Sep-05) Scott Fenton found a shorter proof of ax46.

(18-Sep-05) It is becoming apparent that the recently introduced new
version of df-clim (now called df-climOLDOLD), although very useful
because of its arbitrary starting point, has some limitations:  since
there is no built-in requirement that the limit be a complex number or
that the sequence have complex values, additional conditions would have
to be stated for proving convergence that will make a lot of theorems
awkward.  Therefore I changed the definition to today's df-clim, renamed
the old ~~> to ~~>OLDOLD, and we will reprove most or all of the old
theorems then delete them.  The new definition df-clim is more complex
but it should be worth it in the end.

(There is still df-climOLD, which is severely limited to sequences
starting at 1, that must eventually be replaced with df-clim.  This will
be a longer-term project, since df-clim is directly or indirectly
affects around 500 theorems.  df-climOLDOLD, with its short-lived
existence, only affects around 20 theorems.)

(14-Sep-05) elfz4b shows the converse of elfz4t also holds - a nice
surprise.  Maybe I'll use it instead of elfz4t.

(11-Sep-05) Today we introduce a new definition, df-uz.  The idiom "set
of all integers greater than M" keeps recurring, so I decided to add a
notation for it to shorten proofs, even though it is nonstandard.  "ZZ
subscript >=" is a function that maps an integer to the set of all
integers greater than or equal to it.  I think I chose a good notation
for it that should be easy to remember; if you don't think so let me
know.

(8-Sep-05) fsumserz is an important theorem that expresses a finite sum
as a partial sum of a sequence builder.  In fact, it shows that the
finite sum notation is redundant, although we'll keep it because it is
slightly more compact and seems like a more natural notation.
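The content of fsumserz can be sketched numerically: the finite sum from
M to N equals the N-th entry of the running-sum sequence, with
itertools.accumulate standing in for the seq builder (an external
illustration, not part of set.mm):

```python
from itertools import accumulate
import operator

# The finite sum F(M)+...+F(N) equals the last partial sum of the
# running-sum ("seq"-style) sequence built from F.
F = {n: n * n for n in range(1, 11)}       # an arbitrary sequence on 1..10
M, N = 1, 10
finite_sum = sum(F[k] for k in range(M, N + 1))
partials = list(accumulate((F[k] for k in range(M, N + 1)), operator.add))
assert finite_sum == partials[-1] == 385   # 1^2 + ... + 10^2
```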

(7-Sep-05) A small change was made to df-fz to restrict its domain to
ZZ X ZZ, requiring a new version of fzvalt.  All other theorems remain
compatible, but the change allows us to state the useful elfz7t, where
(provided N is a set) we can deduce that M and N are in ZZ simply from
the fact that ( M ...  N ) has a member.  This will allow us to simplify
proofs by not requiring M e. ZZ and N e. ZZ as additional hypotheses.
(The fact that N must be a set is an artifact of our operation value
definition.  I'm currently pondering changing the operation value
definition so that N would not be required to be a set in elfz7t, but
that would be a huge change throughout the db - perhaps in the long term
future.)

seq0seqz and seq1seqz are yesterday's promised special cases of "seq".

(6-Sep-05) The old symbol "seq" for a 1-based infinite sequence builder
has been changed to "seq1" (df-seq1) for consistency with the 0-based
version "seq0".  The symbol "seq" (df-seqz) has been (re)defined to be
an infinite sequence builder with an arbitrary starting index, and we
will show, today or tomorrow, that "seq0" and "seq1" are special cases
of it.

(26-Aug-05) I didn't like the notation for finite sums so I decided to
do it all over again.  Everything in the last few days has been renamed
to *OLD.  These will be reproved with the new notation and the *OLDs
deleted.  (Also, I extended the definition so the value is zero if the
lower limit is greater than the upper limit, like some books do.)

So, instead of the (to me) awkward "F Sigma <M,N>" for

                ---   N
                \          F_i
                /
                ---  i = M

we now can state this as "Sigma`<<M,N>,F>", which seems more natural.
By df-opr this is equivalent to "<M,N> Sigma F", which results in
shorter proofs, so that's what I'll use for most proofs.  But that's
just a technicality that has nothing to do with the definition; anyone
can trivially reprove them with the "Sigma`<<M,N>,F>" notation if they
want.
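The empty-sum convention mentioned parenthetically above (value zero
when the lower limit exceeds the upper limit) is the same one Python's
range/sum gives for free; a minimal sketch:

```python
# A finite sum whose lower limit exceeds its upper limit is the empty
# sum, i.e. 0 -- range(M, N+1) is simply empty when M > N.
def fsum(M, N, F):
    return sum(F(i) for i in range(M, N + 1))

assert fsum(1, 3, lambda i: i) == 6
assert fsum(5, 2, lambda i: i) == 0   # lower limit > upper limit
```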

(20-Aug-05) Many new or revised definitions today:

df-shft replaces df-shftOLD and df-shftOLDOLD - I extended it to all
    complex numbers, not just integers, for more flexible long-term use.
df-clim replaces df-climOLD - Convergence is now independent of the
    domain (NN, NN0, ZZ) of the underlying sequence - much nicer!
    In fact the input function can be garbage at the beginning, as
    long as there exists an integer somewhere beyond which it behaves.
df-seq0 replaces df-seq0OLD and df-seq0OLDOLD in order to use the new
    df-shft
df-fsum is the definition of a finite series summation.  I have mixed
    feelings about the notation (see fsumvalt comment), and comments are
    welcome.
df-plf is the addition of two functions.  I made it so it can apply to
    complex functions in general, not just sequences.  For sequences,
    we'll restrict the function sum to NN, etc. to strip out meaningless
    values.
df-muf multiplies a constant times a function, again for complex functions
    in general.

Slowly the obsolete *OLD versions will be replaced and eventually
deleted.  Yesterday's shftcan1t, etc. are already obsolete!

The lesson learned from the multiple versions of df-shft was that it
seems more useful to keep the definitions simple and as general as
possible.  Individual theorems can impose the necessary restrictions as
needed, rather than having the restrictions "hard-coded" into the
definition.  For example, df-clim is now dramatically more useful by
not restricting the domain of the underlying sequence to NN.

-------

I am thinking about a general 'seq' recursive sequence generator with an
arbitrary starting point.  The present 'seq' would be renamed to 'seq1'.
What would be nice would be:

    ( + seq0 F ) = ( + ( seq ` 0 ) F )
    ( + seq1 F ) = ( + ( seq ` 1 ) F )  etc.

Unfortunately seq0 and seq1 are proper classes and can't be produced as
function values, but restricting them to be sets would limit their
usefulness.  On the other hand, defining seq so that

    ( + seq0 F ) = ( < + , 0 > seq F )

or

    ( + seq0 F ) = ( + seq < F , 0 > )

or

    ( + seq0 F ) = ( < + , F > seq 0 )

etc. can be made to work without a restriction but none of the 12
possibilities seem natural to me.  What do you think?
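Setting the Metamath packaging question aside, the recursive content of
a 'seq' with an arbitrary starting index m is just a fold of F(m), ...,
F(n) under the operation; a rough functional sketch (names are mine,
not set.mm's):

```python
import operator

# seq(m, op, F)(n) folds F(m), F(m+1), ..., F(n) with op; the 0-based
# and 1-based builders are then just the m = 0 and m = 1 cases.
def seq(m, op, F):
    def partial(n):
        acc = F(m)
        for i in range(m + 1, n + 1):
            acc = op(acc, F(i))
        return acc
    return partial

seq1 = seq(1, operator.add, lambda i: i)   # partial sums starting at 1
assert seq1(4) == 1 + 2 + 3 + 4
```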

(17-Aug-05) cvgratlem1,2,3 will replace the longer old versions (renamed
*OLD) in a re-proof of cvgrat that I have planned.

(5-Aug-05) The old definitions of the shift and seq0 operations have
been SCRAPPED.  They have been renamed df-shftOLD and df-seq0OLD (to be
deleted eventually), and replaced by new ones df-shft and df-seq0.  All
of the old theorems are obsolete and have been renamed *OLD.  The old
symbols have been changed to prevent accidental re-use.

The new definitions will provide simpler and more general theorems.  For
example, seq01 and seq0p1 are now the exact analogs of seq1 and seqp1 -
compare seq01OLD and seq0p1OLD, which required an annoying functionality
hypothesis.

(31-Jul-05) Per a suggestion from Scott Fenton, I renamed the following
theorems:

  Old       New

  syl34d    imim12d
  syl4d     imim2d
  syl3d     imim1d
  syl34     imim112i
  syl4      imim2i
  syl3      imim1i
  syl2      imim2
  syl1      imim1

(27-Jul-05) I was finally able to find a shorter proof of uzind.
Veteran visitors to this site will recall the 3/4 megabyte proof
described on 18-Jun-04 in mmnotes2004.txt, then called zind, and
currently renamed to uzindOLD.

(11-Jul-05) Back to the drawing board...  I decided to change binary
coefficient df-bc so that it is now defined (as 0) outside of its
"standard" domain of 0 <_ k <_ n, as is often done in the literature.
With the old definition, I can now see that many proofs using it would
have been very awkward.  Accordingly, several proofs were changed to
accommodate the new definition (not shown on the 'most recent' page - I
usually do not re-date modified proofs) and today's new ones were added.
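The revised convention is easy to model: define the value as 0 outside
0 <= k <= n.  A small sketch (math.comb for the standard domain),
showing how edge cases of Pascal's rule stop needing side conditions:

```python
from math import comb

# Binomial coefficient defined as 0 outside the "Pascal triangle"
# domain 0 <= k <= n, matching the revised df-bc.
def bc(n, k):
    return comb(n, k) if 0 <= k <= n else 0

assert bc(5, 2) == 10
assert bc(5, 7) == 0 and bc(5, -1) == 0
# Pascal's rule now holds without domain caveats at the edges:
assert bc(5, 0) == bc(4, -1) + bc(4, 0)
```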

(6-Jul-05) peano2re, although it is trivial and seems silly, shortens a
dozen proofs and reduces the net size of the set.mm database file.

(5-Jul-05) peano5nn is a simplification of the previous version.  df-n
was also simplified.

(28-Jun-05) pm4.83 finally completes the entire collection of the 193
propositional calculus theorems in Principia Mathematica.  This had been
done before for the Metamath Solitaire applet in
http://us2.metamath.org:8888/mmsolitaire/pmproofs.txt - but the set.mm
proofs are hierarchically structured to be short, indeed as short as I
(or Roy Longton for some of them) could find.

An ordered index of these can be found on the xref file
http://us2.metamath.org:8888/mpegif/mmbiblio.html in the
[WhiteheadRussell] entries.

(26-Jun-05) Yesterday's reuunixfr probably ranks among the most cryptic
in the database.  :)  Today's reuunineg shows an application of it that
is much easier to understand, with most of the confusing hypotheses of
reuunixfr eliminated.

(21-Jun-05) rabxfr lets us conclude things like the following:
(z e. RR -> (z e. {x e. RR | x < 1} <-> -z e. {y e. RR | -y < 1})).
The first two hypotheses just specify that y mustn't be free in B and C
(a less strict requirement than distinct variable groups y,B and y,C).

pm4.42 is Roy Longton's first Metamath proof.

(20-Jun-05) shftnnfn and shftnnval show the example of NN shifted to NN0
described in df-shft.  Hopefully these two theorems make it clear, in
a simple and intuitive way, what the 'shift' operation does.
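For readers without set.mm at hand, here is a heavily hedged sketch of
the idea, assuming the convention (F shift A)`x = F`(x - A) (see df-shft
for the precise form): a function on NN = {1,2,...} shifted by -1
becomes one on NN0 = {0,1,...}:

```python
# Hypothetical model of the 'shift' operation under the assumed
# convention (F shift A)(x) = F(x - A); set.mm's df-shft is the
# authoritative definition.
def shift(F, A):
    return lambda x: F(x - A)

F = lambda n: 10 * n          # think of F as defined on NN = {1, 2, ...}
G = shift(F, -1)              # now "defined on" NN0 = {0, 1, ...}
assert G(0) == F(1) and G(4) == F(5)
```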

(19-Jun-05) df-shft is a new definition; see its comment for an
explanation of the sequence shift operation.  In general I dislike
introducing a made-up explicit notation for a concept that exists in the
literature only implicitly in informal proofs, and I try to avoid it
when possible, because the notation will be completely unfamiliar even
to mathematicians.  But in the case of df-shft, after careful
consideration I felt the benefits will outweigh this disadvantage.  Once
the initial complexity is overcome with some basic lemmas, it is a
relatively simple concept to understand intuitively.

(18-Jun-05) We will start using j,k,m,n for integer set variables and
J,K,M,N for integer class variables.  I hope this will improve
readability a little.  Over time I will retrofit old theorems.  This
will be a major change involving hundreds of theorems, so if you have
comments on this let me know.

In the retrofitted proof of bcvalt, you can see the effect of this
change.

(17-Jun-05) imret provides us with a simpler way to define the imaginary
part, compared to df-im.  I may do that eventually.

(11-Jun-05) I finally caved in and revised df-exp so that 0^0=1 (as
can be seen with expnn00) instead of being undefined.  I have decided
that otherwise, some future things I have in mind are just going to be
too awkward.

Raph Levien came up with the original df-exp, where 0^0=1.  But he's a
computer scientist.  From a more purist perspective, I felt it was an
"inelegant patch," as it has been called, and I changed his definition
to exclude it.  For the past year we've trodden along merrily without
it.  But I'm starting to see that 0^0=1 will lead to simpler proofs and
statements of theorems in many cases.  So now we have 0^0=1 again.

Gleason's book defines 0^0=1 and uses it extensively, e.g. in the
definition of the exponential function (where we avoided it by breaking
out the 0th term outside of the infinite sum).
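A small numeric illustration of why the convention helps: with 0^0 = 1
(which Python also adopts), the exponential power series needs no
special 0th term at x = 0:

```python
from math import factorial, isclose, exp

# With 0**0 == 1, the power series for exp works uniformly at x = 0:
# the k = 0 term is 0**0 / 0! = 1, so no term needs to be broken out.
def exp_series(x, terms=30):
    return sum(x**k / factorial(k) for k in range(terms))

assert 0**0 == 1
assert exp_series(0) == 1.0
assert isclose(exp_series(1), exp(1))
```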

For a further discussion of this see:
http://www.faqs.org/faqs/sci-math-faq/specialnumbers/0to0/

Another reason I wanted to avoid defining 0^0 is that years ago on
Usenet, and probably still, there were endless arguments about it.  I
wanted to distance Metamath from that.  :)

(10-Jun-05) The choose function df-bc was added.  The literature
uses math italic capital C - but that conflicts with our purple C for
classes (when printed on a black-and-white printer).  So I decided to
use a Roman C.

bcvalt is somewhat awkward to prove because of its "Pascal triangle"
restricted domain instead of the full NN0 X. NN0.  Thus we have to use
oprabvali instead of the more efficient oprabval2.
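For readers following along, the choose function on its "Pascal triangle"
domain is just the familiar factorial quotient.  A quick sketch in Python
(the choose helper here is an illustration of the standard formula, not
df-bc itself):

```python
from math import comb, factorial

# The choose function on the "Pascal triangle" domain 0 <= k <= n,
# via the standard factorial formula.
def choose(n, k):
    if not 0 <= k <= n:
        raise ValueError("outside the restricted domain")
    return factorial(n) // (factorial(k) * factorial(n - k))

assert choose(5, 2) == comb(5, 2) == 10
# Pascal's rule, the recurrence behind the triangle-shaped domain:
assert choose(6, 3) == choose(5, 2) + choose(5, 3)
```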

(4-Jun-05) As far as I know, inf5 has never been published.  I think it
is neat.

pm2.13 seems like a rather silly variant of excluded middle (exmid).
What can I say - I'm just implementing what's in the book.

(2-Jun-05) efclt makes use (in efcltlem1) of the very important and
useful ratio test for convergence, cvgrat of 28-May-05, to show (in
efcltlem3) the convergence of the exponential function.  This in turn
lets us show that the value of the exponential function is a complex
number.  This will open a lot of doors with what we can do with the
exponential function.  Note that all of the confusing (or at least
unconventional) limit, seq, and infinite sum stuff has disappeared,
having served its purpose, and we're back into familiar territory.
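For the exponential series the ratio test is easy to check numerically.
A quick Python sketch (just an illustration, not tied to the database's
formalization):

```python
from math import factorial

# Ratio test applied to the exponential series a_n = x^n / n!:
# |a_(n+1) / a_n| = |x| / (n+1), which eventually stays below any
# fixed r < 1, so the series converges for every complex x.
x = 3 + 4j                      # |x| = 5
terms = [x**n / factorial(n) for n in range(40)]
ratios = [abs(terms[n + 1] / terms[n]) for n in range(39)]
assert abs(ratios[10] - abs(x) / 11) < 1e-12
assert all(r < 0.5 for r in ratios[10:])   # |x|/(n+1) <= 5/11 for n >= 10
```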

Interestingly, the bounds we found earlier for Euler's number e, in
ege2le3, didn't require all this work.  That is because e is a special
case of the exponential function that is much easier to work with.

(27-May-05) sercj tells us that the complex conjugate of each partial
sum of an infinite series is the partial sum of the complex conjugates
of the underlying sequence.  We prove it by induction.  Recall that
(+ seq F)`A means

                ---   A
                \         F_i
                /
                ---  i = 1
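The statement is easy to spot-check numerically.  A sketch in Python,
with itertools.accumulate standing in for (+ seq F):

```python
from itertools import accumulate

# Spot check of sercj: conjugating the partial sums of F gives the
# same sequence as taking partial sums of the conjugated terms.
F = [1 + 2j, 3 - 1j, -2 + 0.5j, 4j]
partial_sums = list(accumulate(F))                        # (+ seq F)`1, `2, ...
conj_of_sums = [s.conjugate() for s in partial_sums]
sums_of_conj = list(accumulate(f.conjugate() for f in F))
assert conj_of_sums == sums_of_conj
```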

Theorem minusex just says that the negative of any class whatsoever
(even a proper class) is a set.  While this is not very meaningful when
the argument is not a complex number, it saves the effort of proving
that the argument is a complex number, making it trivial, for example,
to eliminate the hypothesis "A e. V" of yesterday's cvgcmp3cet.

(26-May-05) cvgcmp3cet is a pretty massive application of the Weak
Deduction Theorem http://us.metamath.org/mpegif/mmdeduction.html that
converts 8 hypotheses into antecedents.  A number of tricks were
employed to make the proof sizes manageable.  I didn't bother with the
final hypothesis, "A e. V", because it's trivial to eliminate with
vtoclg if needed (you don't need the Weak Deduction Theorem for that)
and in most cases A will exist anyway.

(25-May-05) The theorems expgt1t and oewordri have little to do with
each other.  There is an isomorphism between finite ordinal
exponentiation and exponentiation of the natural number subset of reals,
that could be exploited in principle, but they are independently
developed in our database.  A common root for both can be traced back to
ordinal multiplication (which is a starting point for the construction
of the reals), but from that point they diverge.  And when ordinals get
into the transfinite, the behavior of exponentiation becomes bizarrely
different, as we will see when (hopefully) I eventually get to it.

(24-May-05) Two of the hypotheses of cvgcmp3ce, cvgcmp3ce.4 and
cvgcmp3ce.7, are theoretically unnecessary.  However, eliminating them
is tedious (it involves dinkering around in the hidden regions of F and
G prior to index B; these regions were purposely left undefined to make
the theorem more general) and for most practical purposes unnecessary,
so I decided to leave the theorem "less than perfect," so to speak, at
least for now.

We could also, with only a few more steps (by changing y to a dummy
variable z and using cbvexv and mpbi at the end) eliminate the
requirement that x and y be distinct variables.  I may do this if it
ever becomes useful to do so.  In that case, the distinct variable group
"x,y,G" would split into "x,G" and "y,G".

The new rcla4 series swaps the antecedents.  I think this makes their
use more natural in a lot of cases.  However, I'm wondering if this was
a mistake:  rcla4v was used in around 90 theorems, and it took several
hours just to convert a couple dozen of the easiest ones.  In maybe 75%
of those cases the proof size was reduced, but in others it was
increased, and the hoped for net "payback" in terms of reduced database
size hardly seems worth it, if there will be a net reduction at all.
The old rcla4* versions were renamed with an "OLD" suffix, and I'll be
eliminating them over time (on dreary days when I'm feeling otherwise
uninspired).

By the way, here is an informal breakdown of the cryptic name "rcla42v":

  'r' - uses restricted quantification (vs. "cla4*")
  'cl' - deals with substitution with a class variable
  'a4' - an analog to the specialization axiom ax-4 (and Axiom 4 in
         many systems of predicate calculus, which is stdpc4 in our
         database)
  '2' - deals with two quantifiers
  'v' - distinct variables eliminate the hypothesis that occurs in rcla4

(21-May-05) eqtr2t and eqtr3t were added because they shortened 16
proofs, with the net effect of reducing the total database size.

(20-May-05) odi is essentially the same proof as the 2/3 smaller nndi
for natural numbers, except that it uses transfinite induction instead
of finite induction.  So we have to prove not only the 0 and successor
cases but also the limit ordinal case.  But the limit ordinal case was a
monstrosity to prove, taking up 2/3 of the proof from steps 59 through
257.  Eventually I'll shorten nndi as a special case of odi.

(16-May-05) drex2 is part of a cleanup of some old lemmas.  The notable
feature of this theorem and others like it is that x, y, and z don't
have to be distinct from each other for the theorem to hold (normally, z
can't occur in the antecedent, as in for example biexdv).  The
"Distinctor Reduction Theorem" provides a way to trim off unnecessary
antecedents of the form (not)(forall x)(x=y), called "distinctors," in a
system of predicate calculus with no distinct variable restrictions at
all (which makes automated proof verification trivial, like for
propositional calculus).  (That system is the same as ours minus ax-16
and ax-17.  The paper shows that it is complete except for antecedents
of the form (not)(forall x)(x=y).  To translate its theorems to standard
predicate calculus, these antecedents are discarded and replaced with
restrictions of the form "x and y must be distinct variables.")

We can also translate distinctors to distinct variable pairs in the
logic itself (after ax-16 and ax-17 are added) by detaching them with
dtru.

The reverse can be done (distinct variable pairs to distinctors) by
using dvelim.  This comes in handy when a distinct variable restriction
is unnecessary, e.g. x and y in ralcom2; we convert the distinct variable
pair to a distinctor with dvelim then eliminate the distinctor with the
algorithm of the Distinctor Reduction Theorem.

(13-May-05) Thank goodness caucvg is out of the way...  The lemmas just
seemed to grow bigger and bigger as I scrambled to complete it, and it
is quite a mess towards the end.  When the proof author said "this
direction is much harder" he/she was not joking.  There is often much
hidden detail you end up discovering, that isn't apparent at first, when
you try to formalize a proof.  (For example, the very first stumbling
block was how to formalize "the set of numbers less than all values of F
except for finitely many".  Certainly "finitely" isn't to be taken
literally, i.e. strictly less equinumerous than omega, unless we want an
incredibly complex proof.)

It looks like I should eventually introduce an abbreviation for Cauchy
sequences, like I do for Hilbert space.  Then these proofs can be redone
with a somewhat simplified notation.  (That's easy to do, once you have
the proof.)

(12-May-05) For the caucvg proof, I am formalizing the proof found at
http://pirate.shu.edu/projects/reals/numseq/proofs/cauconv.html . I
couldn't find this proof in a textbook (most of those proofs use "lim
sup" instead).  If someone has a textbook reference for this particular
proof, it will be appreciated.

cruOLD has been phased out.

(11-May-05) cru generalizes the old version (now called cruOLD until it
is phased out) to include the converse.

(10-May-05) relimasn is a version of imasng that doesn't require that A
be a set (in the case where R is a relation, which is most of the time).
When A is not a set, the theorem isn't really meaningful - both sides of
the equality become the empty set - but relimasn allows us to prove more
general theorems overall.

(9-May-05) This morning a correspondent wrote me:

> Do you know of a rigorous formulation of Wang's single axiom schema for
> first order identity theory? I saw one in Copi's 'Symbolic Logic [Fourth
> Edition]' page 280, but I don't quite follow his notation nor do I see how
> to precisely state the stipulations for the variables in a "phi and psi"
> style axiom schema. And I didn't see it in your proof explorer as a theorem.

I added sb10f and answered:

I have the 5th edition, and I think you mean P6 of system RS_1 on p.
328.  (The 5th ed. added a chapter on set theory, which moved all the
later pages up, probably by 48 pages or so.  Copi was noted for killing
the used-book market by releasing new editions every few years.)

In the way it is stated, this axiom apparently has 2 errors.  First,
there are no restrictions (that I could find) stated for Fx and Fy.  Let
us suppose Fz is x=z.  Then Fx is x=x and Fy is x=y.  Putting these into
P6, we get:

   A. y (Fy <-> E. x (x=y /\ Fx))
   A. y (x=y <-> E. x (x=y /\ x=x))
   A. y (x=y <-> E. x (x=y))
   A. y (x=y <-> true)
   A. y (x=y)

The last line is false in a universe with 2 or more elements, so the
system is inconsistent.

The correction is to add a proviso that x must not occur free in Fy.

The second mistake is that there is no requirement (that I could find)
that x and y must be distinct variables.  But if they are not
distinct, an inconsistent system results.

With these corrections, the proofs of the usual axioms on p. 329 still
go through.

The "A. y" prefix is redundant in P6, since it can be added by R2
(generalization).  In a logic that allows an empty universe, the A. y
would be needed, but on p. 319 it is stated that RS_1 is intended to be
true in a nonempty universe (and the rest of the axioms won't work in an
empty universe).  So, it seems like P6 is an afterthought tacked onto
the system.  Even the notation Fx and Fy is different from the Q that
represents the substitution instance of P in earlier axioms e.g. P5
p. 294.

I added what I thought was a close approximation to P6 (without the
redundant quantifier) here:

  http://us2.metamath.org:8888/mpegif/sb10f.html

The hypothesis specifies that x must not occur free in phi, and x and y
must be distinct, as must necessarily be the case.

Three other variants that are similar to P6 are:

  http://us2.metamath.org:8888/mpegif/sb5.html
  http://us2.metamath.org:8888/mpegif/sb5rf.html
  http://us2.metamath.org:8888/mpegif/equsex.html ,

the last one implicitly substituting y for x in phi to result in psi.

By the way, even though we can express a logical equivalent to P6 in
Metamath, this does not mean that it becomes the sole axiom replacing
the other equality/substitution axioms.  (It is possible that one or
more of the others could become redundant, but I haven't thought about
it too much.)  The reason is that in RS_1, substitution is done at the
metalogical level, outside of the primitive system.  In Metamath, we do
this "metalogic" at the primitive level of the system itself, and we use
additional axioms involving equality to accomplish this.  In many ways
substitution and equality are closely related, and the standard
formalization "hides" this by moving substitution outside of the axioms.

You might want to re-read these that explain this in more detail:

  http://us.metamath.org/mpegif/mmset.html#axiomnote
  http://us.metamath.org/mpegif/mmset.html#traditional


(8-May-05) While Euclid's classic proof that there are infinitely many
primes is easy to understand intuitively, I found the proof used by
infpnlem1 and infpnlem2 simpler to formalize.  (Specifically, this proof
avoids the product of a hypothetical finite set of all primes, which I
found cumbersome to formalize.)

Here is the proof:

  For any number n, the smallest divisor (greater than 1) of n!+1 is a
  prime greater than n.  Hence there is no largest prime.

Or, in explicit detail:

  Suppose there are a finite number of primes.  Let p be the largest.  Let
  q be the smallest divisor (greater than 1) of p!+1.  (The set of
  divisors of p!+1 is nonempty, since p!+1 is one of them, so by the
  well-ordering principle there is a smallest, which we will call q.)  Then
  none of 2,3,4,...,p divides q since otherwise it would also divide
  p!+1, which is impossible.  (2,3,4,...,p all divide p!, so none divides
  p!+1.)  And none of p+1,...,q-1 divides q since otherwise it would also
  divide p!+1, and then q would not be the smallest divisor of p!+1.
  Therefore q is prime, and q > p, so p is not the largest prime.
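The proof above is easy to animate numerically.  A Python sketch (the
helper names are made up for the illustration):

```python
from math import factorial

def smallest_divisor_gt1(m):
    """Smallest divisor of m greater than 1 (always prime for m > 1)."""
    d = 2
    while m % d:
        d += 1
    return d

def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))

# For each n, the smallest divisor (> 1) of n!+1 is a prime greater
# than n -- so no prime is the largest.
for n in range(1, 9):
    q = smallest_divisor_gt1(factorial(n) + 1)
    assert is_prime(q) and q > n
```

For instance at n = 4, 4!+1 = 25 and its smallest divisor greater than
1 is the prime 5 > 4.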

(6-May-05) funcnvres2 is a tongue-twister, or perhaps a brain-twister...

I reproved cvgcmp as a special case of cvgcmp2.  However, I have
(temporarily?) left in the original proof and called it cvgcmpALT, as I
think it might be instructive.  Comparing cvgcmpALT to
cvgcmp2lem+cvgcmp2, i.e. 33 vs. 68+17=85 steps, you can see how much
extra work was needed just to ignore up to the Bth term in cvgcmp2.

(5-May-05) cvgcmp2c is useful because it allows the test function
to be much larger (via a large C) than the reference function, yet
still converge.

(4-May-05) divclt, divrect, redivclt are slight modifications of their
older versions, which have been renamed divcltOLD, divrectOLD,
redivcltOLD and which will disappear when they are phased out over time.

(3-May-05) prodgt0t also works for A=0, not just A strictly greater than
0.  This makes the theorem more general - something I like to do when I
can - but requires more work.  In prodgt0 (from which prodgt0t is
derived with dedth2h) you can see the separate derivation that we need
for the A=0 case.

(2-May-05) reccl* and rereccl* shorten many proofs (by replacing explicit
division closure that was used before) - e.g. I shortened 18 proofs
with rereccl.  So even though these are trivial they were worth adding.

(1-May-05) cvgcmp2 will be used to build a general-purpose comparison
test for complex number sequences.  cvgcmp2 tests for convergence of a
nonnegative real infinite series (+ seq G) (which normally will be a
series of absolute values), which is compared to a known convergent
series (+ seq F).  This version of cvgcmp allows us to ignore the
initial segment up to the Bth term; this was a tricky thing to do.  To
achieve this I compare G to an auxiliary sequence H (see cvgcmp2lem)
instead of F; H adds the supremum of the initial segment of G to F, so
it is guaranteed to be bigger than G everywhere including the initial
segment.

Originally I planned to use climshift of 24-Apr and sertrunc of 27-Apr
to achieve this (the ignoring up to the Bth term); now it looks like
they are no longer needed.  Too bad; they were a lot of work.  Perhaps
I'll leave them in, in case a use for them shows up in the future.

In cvgcmp2 we show the actual value it converges to (i.e. the sup)
rather than just existence.  This will allow us to use hypotheses
instead of antecedents, which will make some proofs smaller.  For our
final theorem we will eliminate the hypotheses with the Weak Deduction
Theorem dedth then produce a simpler-to-state existence version.

(24-Apr-05) 2eu8 is a neat little discovery that I doubt has ever been
published.  It is fun seeing what you can do with the E! quantifier.
Hardly anything about it exists in the literature, and apparently double
E! has never even been published correctly; see 2eu5.  Exercise:  Can
you change E!x E!y to E!y E!x in either side of 2eu8 by using 2eu7 as
suggested?  (Hint:  use ancom.)  Note that E!x E!y doesn't commute
generally, unlike Ex Ey; probably not too many people know that.
Another interesting thing about 2eu7 and 2eu8:  x and y don't have to be
distinct variables.

(23-Apr-05) climshift shows that we can ignore the initial segment of a
sequence when computing the limit.  This is intuitively obvious (since
it's only what happens towards infinity that counts) but is a little
tedious to prove.

(22-Apr-05) It is curious that max1 doesn't require B to be a real
number.

(21-Apr-05) In steps 1-19 of climre, you may wonder why we have extra
steps using vtoclga to switch from variable x to variable w, when in
fact variable x could have been used throughout up to step 19.  The
answer is that by using w's, we can reuse step 14 at step 43, without
having to repeat its work.  This is a little trick that shortens the
compressed proof and the web page.  (The uncompressed proof, on the
other hand, is lengthened because it does not reuse previous steps, but
normally set.mm is stored with compressed proofs.)

(20-Apr-05) For serabsdif, note that (+ seq F)`n - (+ seq F)`m is the
sum from m+1 to n of the terms of F, i.e.
F`(m+1) + F`(m+2) + ... + F`n.  So even though our notation for series
(+ seq F) is limited for notational simplicity to start at the fixed
lower index of 1, we can represent a more general lower limit using
this method.  (A more general notation for series may be introduced in
the future.)
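The device is the usual telescoping one, and is easy to check with a
quick numeric sketch (0-based Python lists standing in for the 1-based
series, with a leading 0 so indices line up):

```python
from itertools import accumulate

F = [5, 2, 7, 1, 9, 4]              # F`1 .. F`6
S = [0] + list(accumulate(F))       # S[k] = (+ seq F)`k, with S[0] = 0
m, n = 2, 5
# (+ seq F)`n - (+ seq F)`m  =  F`(m+1) + ... + F`n
assert S[n] - S[m] == sum(F[m:n])   # 7 + 1 + 9 = 17
```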

(19-Apr-05) We're going to be using (+ seq F) a lot, so here's a
refresher, since this notation is not standard.  We are using a special
case of our general-purpose "seq" operation, and there seems to be no
standard notation for (+ seq F) in the literature other than the
informal "a_1 + a_2 + ...  + a_n" which is not suitable for formal math.
(Gleason uses "F-" in front of an infinite sum to indicate the partial
sum function underlying the infinite sum, but it is not standard.)  If
you are following these theorems it might be useful to keep the
following note in mind.  It is straightforward if you understand the
correspondence to the conventional notation.

(+ seq F) is the sequence of partial summations in an infinite
series.  E.g. for a sequence of squares:

  argument sequence   partial sum of series

   n         F`n     (+ seq F)`n
   1         1          1
   2         4          5
   3         9         14
   4        16         30
          ...
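The table can be reproduced in a couple of lines (a sketch using
Python's accumulate as the partial-sum operator):

```python
from itertools import accumulate

# The squares table: F`n = n^2, and (+ seq F)`n = 1 + 4 + ... + n^2.
F = [n * n for n in range(1, 5)]            # 1, 4, 9, 16
assert list(accumulate(F)) == [1, 5, 14, 30]
```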

Of course this series diverges, so the infinite sum doesn't exist, but
all partial summations exist as shown a