.\" DO NOT MODIFY THIS FILE!  It was generated by help2man 1.27.
.TH RAINBOW "1" "November 2002" "rainbow 0.2" "User Commands"
.SH NAME
rainbow \- document classification front-end to libbow
.SH SYNOPSIS
.B rainbow
[\fIOPTION\fR...] [\fIARG\fR...]
.SH DESCRIPTION
Rainbow is a C program that performs document classification using
one of several different methods, including naive Bayes, TFIDF/Rocchio,
K-nearest neighbor, Maximum Entropy, Support Vector Machines, Fuhr's
Probabilistic Indexing, and a simple-minded form of shrinkage with
naive Bayes.
.PP
.B Rainbow
is a standalone program that does document classification.
Here are some examples:
.IP
.B rainbow \-i ./training/positive ./training/negative
.PP
Using the text files found under the directories `./training/positive'
and `./training/negative', tokenize, build word vectors, and write the
resulting data structures to disk.
.IP
.B  rainbow \-\-query=./testing/254
.PP
Tokenize the text document `./testing/254', and classify it,
producing output like:
.IP
.B /home/mccallum/training/positive 0.72
.br
.B /home/mccallum/training/negative 0.28
.PP
.IP
.B rainbow \-\-test\-set=0.5 \-t 5
.PP
Perform 5 trials, each consisting of a new random test/train split,
and output the classifications of the test documents each time.
.SH OPTIONS
.IP
Testing documents that are specified on the command line:
.TP
\fB\-x\fR, \fB\-\-test\-files\fR
In the same format as `\-t', output classifications of
documents in the directory ARG.  The ARG must have
the same subdir names as the ARGs specified when
\fB\-\-index\fR'ing.
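.IP
A sketch (assuming a hypothetical `./holdout' directory whose
subdirectory names match the class directories given at
\fB\-\-index\fR time, and a hypothetical data directory `~/model'):
.IP
.nf
# classify every document found under ./holdout
rainbow \-d ~/model \-x ./holdout
.fi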
.TP
\fB\-X\fR, \fB\-\-test\-files\-loo\fR
Same as \fB\-\-test\-files\fR, but evaluate the files
assuming that they were part of the training data,
doing leave-one-out cross-validation.  This
only works with the classification methods that
support leave-one-out evaluation.
.IP
Splitting options:
.TP
\fB\-\-ignore\-set\fR=\fISOURCE\fR
How to select the ignored documents.  Same format
as \fB\-\-test\-set\fR.  Default is `0'.
.TP
\fB\-\-set\-files\-use\-basename\fR[=\fIN\fR]
When using files to specify doc types, compare
only the last N components of the doc's pathname.
That is, use the filename and the last N-1
directory names.  If N is not specified, it
defaults to 1.
.TP
\fB\-\-test\-set\fR=\fISOURCE\fR
How to select the testing documents.  A number
between 0 and 1 inclusive with a decimal point
indicates a random fraction of all documents.  The
number of documents selected from each class is
determined by attempting to match the proportions
of the non-ignore documents.  A number with no
decimal point indicates the number of documents to
select randomly.  Alternatively, a suffix of `pc'
indicates the number of documents per-class to
tag.  The suffix 't' for a number or proportion
indicates to tag documents from the pool of
training documents, not the untagged documents.
`remaining' selects all documents that remain
untagged at the end.  Anything else is interpreted
as a filename listing documents to select.
Default is `0.0'.
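.IP
Some illustrative SOURCE values (hypothetical invocations; `~/model'
is an assumed data directory):
.IP
.nf
rainbow \-d ~/model \-\-test\-set=0.3 \-t 1   # a random 30% of all docs
rainbow \-d ~/model \-\-test\-set=50 \-t 1    # 50 randomly chosen docs
rainbow \-d ~/model \-\-test\-set=10pc \-t 1  # 10 docs per class
.fi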
.TP
\fB\-\-train\-set\fR=\fISOURCE\fR
How to select the training documents.  Same format
as \fB\-\-test\-set\fR.  Default is `remaining'.
.TP
\fB\-\-unlabeled\-set\fR=\fISOURCE\fR
How to select the unlabeled documents.  Same
format as \fB\-\-test\-set\fR.  Default is `0'.
.TP
\fB\-\-validation\-set\fR=\fISOURCE\fR
How to select the validation documents.  Same
format as \fB\-\-test\-set\fR.  Default is `0'.
.IP
For building data structures from text files:
.TP
\fB\-i\fR, \fB\-\-index\fR
Tokenize training documents found under
directories ARG... (where each ARG directory
contains documents of a different class), build a
token-document matrix, and save it to disk.
.TP
\fB\-\-index\-lines\fR=\fIFILENAME\fR
Read documents' contents from the filename
argument, one per line.  The first two
space-delimited words on each line are the
document name and class name, respectively.
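.IP
A sketch of the expected input (document and class names are
hypothetical):
.IP
.nf
# <document name> <class name> <document contents ...>
doc0001 positive this camera works well and arrived on time
doc0002 negative it stopped working after two days
.fi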
.TP
\fB\-\-index\-matrix\fR=\fIFORMAT\fR
Read document/word statistics from a file in the
format produced by \fB\-\-print\-matrix\fR=\fIFORMAT\fR.  See
\fB\-\-print\-matrix\fR for details about FORMAT.
.IP
For doing document classification using the token-document matrix built with
\fB\-i\fR:
.TP
\fB\-\-forking\-query\-server\fR=\fIPORTNUM\fR
Same as `--query-server', except allow multiple
clients at once by forking for each client.
.TP
\fB\-\-print\-doc\-length\fR
When printing the classification scores for each
test document, at the end also print the number of
words in the document.  This only works with the
\fB\-\-test\fR option.
.TP
\fB\-q\fR, \fB\-\-query\fR[=\fIFILE\fR]
Tokenize input from stdin [or FILE], then print
classification scores.
.TP
\fB\-\-query\-server\fR=\fIPORTNUM\fR
Run rainbow in server mode, listening on socket
number PORTNUM.  You can try it by executing this
command, then in a different shell window on the
same machine typing `telnet localhost PORTNUM'.
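.IP
For example (the data directory and port number are arbitrary,
hypothetical choices):
.IP
.nf
# start the server on port 1821, using the model in ~/model
rainbow \-d ~/model \-\-query\-server=1821
# then, from another shell on the same machine:
telnet localhost 1821
.fi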
.TP
\fB\-r\fR, \fB\-\-repeat\fR
Prompt for repeated queries.
.IP
Rainbow-specific vocabulary options:
.TP
\fB\-\-hide\-vocab\-in\-file\fR=\fIFILE\fR
Hide from the vocabulary all words read as
space-separated strings from FILE.  Note that
regular lexing is not done on these strings.
.TP
\fB\-\-hide\-vocab\-indices\-in\-file\fR=\fIFILE\fR
Hide from the vocabulary all words read as
space-separated word integer indices from FILE.
.TP
\fB\-\-use\-vocab\-in\-file\fR=\fIFILE\fR
Limit vocabulary to just those words read as
space-separated strings from FILE.  Note that
regular lexing is not done on these strings.
.IP
Testing documents that were indexed with `-i':
.TP
\fB\-t\fR, \fB\-\-test\fR=\fIN\fR
Perform N test/train splits of the indexed
documents, and output classifications of all test
documents each time.  The parameters of the
test/train splits are determined by the option
`\-\-test\-set' and its siblings.
.TP
\fB\-\-test\-on\-training\fR=\fIN\fR
Like `\-\-test', but instead of classifying the
held-out test documents, classify the training data
in leave-one-out fashion.  Perform N trials.
.IP
Diagnostics:
.TP
\fB\-\-build\-and\-save\fR
Build a class model and save it to disk.  This
option is unstable.
.TP
\fB\-B\fR, \fB\-\-print\-matrix\fR[=\fIFORMAT\fR]
Print the word/document count matrix in an awk-
or perl-accessible format.  Format is specified by
the following letters:
.SS "print all vocab or just words in document:"
.IP
a=all OR s=sparse
.SS "print counts as ints or binary:"
.IP
b=binary OR i=integer
.SS "print word as:"
.IP
n=integer index OR w=string OR e=empty OR c=combination
.IP
The default is the last in each list.
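.IP
For example, a hypothetical invocation (assuming the FORMAT letters
are concatenated, one from each list; `~/model' is an assumed data
directory):
.IP
.nf
# sparse rows, integer counts, words printed as strings
rainbow \-d ~/model \-B siw > matrix.txt
.fi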
.TP
\fB\-F\fR, \fB\-\-print\-word\-foilgain\fR=\fICLASSNAME\fR
Print the word/foilgain vector for CLASSNAME.  See
Mitchell's Machine Learning textbook for a
description of foilgain.
.TP
\fB\-I\fR, \fB\-\-print\-word\-infogain\fR=\fIN\fR
Print the N words with the highest information
gain.
.TP
\fB\-\-print\-doc\-names\fR[=\fITAG\fR]
Print the filenames of documents contained in
the model.  If the optional TAG argument is given,
print only the documents that have the specified
tag, where TAG might be `train', `test', etc.
.TP
\fB\-\-print\-log\-odds\-ratio\fR[=\fIN\fR]
For each class, print the N words with the
highest log odds ratio score.  Default is N=10.
.TP
\fB\-\-print\-word\-counts\fR=\fIWORD\fR
Print the number of times WORD occurs in each
class.
.TP
\fB\-\-print\-word\-pair\-infogain\fR=\fIN\fR
Print the N word-pairs which, when co-occurring in
a document, have the highest information gain.
(Unfinished; ignores N.)
.TP
\fB\-\-print\-word\-probabilities\fR=\fICLASS\fR
Print P(w|CLASS), the probability in class CLASS
of each word in the vocabulary.
.TP
\fB\-\-test\-from\-saved\fR
Classify using the class model saved to disk.
This option is unstable.
.TP
\fB\-\-use\-saved\-classifier\fR
Don't ever re-train the classifier.  Use whatever
class barrel was saved to disk.  This option is
designed for use with \fB\-\-query\-server\fR.
.TP
\fB\-W\fR, \fB\-\-print\-word\-weights\fR=\fICLASSNAME\fR
Print the word/weight vector for CLASSNAME, sorted
with high weights first.  The meaning of `weight'
is undefined.
.IP
Probabilistic Indexing options, \fB\-\-method\fR=\fIprind\fR:
.TP
\fB\-G\fR, \fB\-\-prind\-no\-foilgain\-weight\-scaling\fR
Don't have PrInd scale its weights by Quinlan's
FoilGain.
.TP
\fB\-N\fR, \fB\-\-prind\-no\-score\-normalization\fR
Don't have PrInd normalize its class scores to sum
to one.
.TP
\fB\-\-prind\-non\-uniform\-priors\fR
Make PrInd use non-uniform class priors.
.IP
General options
.TP
\fB\-\-annotations\fR=\fIFILE\fR
The sarray file containing annotations for the
files in the index.
.TP
\fB\-b\fR, \fB\-\-no\-backspaces\fR
Don't use backspace when verbosifying progress
(good for use in emacs)
.TP
\fB\-d\fR, \fB\-\-data\-dir\fR=\fIDIR\fR
Set the directory in which to read/write
word-vector data (default=~/.<program_name>).
.TP
\fB\-\-random\-seed\fR=\fINUM\fR
The non-negative integer to use for seeding the
random number generator.
.TP
\fB\-\-score\-precision\fR=\fINUM\fR
The number of decimal digits to print when
displaying document scores.
.TP
\fB\-v\fR, \fB\-\-verbosity\fR=\fILEVEL\fR
Set the amount of info printed while running
(0=silent, 1=quiet, 2=show-progress, ... 5=max).
.IP
Lexing options
.TP
\fB\-\-append\-stoplist\-file\fR=\fIFILE\fR
Add words in FILE to the stoplist.
.TP
\fB\-\-exclude\-filename\fR=\fIFILENAME\fR
When scanning directories for text files, skip
files with name matching FILENAME.
.TP
\fB\-g\fR, \fB\-\-gram\-size\fR=\fIN\fR
Create tokens for all 1-grams,... N-grams.
.TP
\fB\-h\fR, \fB\-\-skip\-header\fR
Avoid lexing news/mail headers by scanning forward
until two newlines.
.TP
\fB\-\-istext\-avoid\-uuencode\fR
Check for uuencoded blocks before saying that
the file is text, and say no if there are many
lines of the same length.
.TP
\fB\-\-lex\-pipe\-command\fR=\fISHELLCMD\fR
Pipe files through this shell command before
lexing them.
.TP
\fB\-\-max\-num\-words\-per\-document\fR=\fIN\fR
Only tokenize the first N words in each document.
.TP
\fB\-\-no\-stemming\fR
Do not modify lexed words with a stemming
function. (usually the default, depending on
lexer)
.TP
\fB\-\-replace\-stoplist\-file\fR=\fIFILE\fR
Empty the default stoplist, and add
space-delimited words from FILE.
.TP
\fB\-s\fR, \fB\-\-no\-stoplist\fR
Do not toss lexed words that appear in the
stoplist.
.TP
\fB\-\-shortest\-word\fR=\fILENGTH\fR
Toss lexed words that are shorter than LENGTH.
Default is usually 2.
.TP
\fB\-S\fR, \fB\-\-use\-stemming\fR
Modify lexed words with the `Porter' stemming
function.
.TP
\fB\-\-use\-stoplist\fR
Toss lexed words that appear in the stoplist.
(usually the default SMART stoplist, depending on
lexer)
.TP
\fB\-\-use\-unknown\-word\fR
When used in conjunction with \fB\-O\fR or \fB\-D\fR, capture
all words with occurrence counts below the threshold
as the `<unknown>' token.
.TP
\fB\-\-xxx\-words\-only\fR
Only tokenize words with `xxx' in them
.IP
Mutually exclusive choice of lexers
.TP
\fB\-\-flex\-mail\fR
Use a mail-specific flex lexer
.TP
\fB\-\-flex\-tagged\fR
Use a tagged flex lexer
.TP
\fB\-H\fR, \fB\-\-skip\-html\fR
Skip HTML tokens when lexing.
.TP
\fB\-\-lex\-alphanum\fR
Use a special lexer that includes digits in
tokens, delimiting tokens only by non-alphanumeric
characters.
.TP
\fB\-\-lex\-infix\-string\fR=\fIARG\fR
Use only the characters after ARG in each word for
stoplisting and stemming.  If a word does not
contain ARG, the entire word is used.
.TP
\fB\-\-lex\-suffixing\fR
Use a special lexer that adds suffixes depending
on Email-style headers.
.TP
\fB\-\-lex\-white\fR
Use a special lexer that delimits tokens by
whitespace only, and does not change the contents
of the token at all---no downcasing, no stemming,
no stoplist, nothing.  Ideal for use with an
externally-written lexer interfaced to rainbow
with \fB\-\-lex\-pipe\-command\fR.
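.IP
A sketch of that combination (the tokenizer script and paths are
hypothetical):
.IP
.nf
# my\-tokenizer.pl (hypothetical) emits whitespace\-separated tokens
rainbow \-d ~/model \-\-lex\-pipe\-command='perl my\-tokenizer.pl' \\
    \-\-lex\-white \-i ./training/positive ./training/negative
.fi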
.IP
Feature-selection options
.TP
\fB\-D\fR, \fB\-\-prune\-vocab\-by\-doc\-count\fR=\fIN\fR
Remove words that occur in N or fewer documents.
.TP
\fB\-O\fR, \fB\-\-prune\-vocab\-by\-occur\-count\fR=\fIN\fR
Remove words that occur fewer than N times.
.TP
\fB\-T\fR, \fB\-\-prune\-vocab\-by\-infogain\fR=\fIN\fR
Remove all but the top N words by selecting words
with highest information gain.
.IP
Weight-vector setting/scoring method options
.TP
\fB\-\-binary\-word\-counts\fR
Instead of using integer occurrence counts of
words to set weights, use binary
absence/presence.
.TP
\fB\-\-event\-document\-then\-word\-document\-length\fR=\fINUM\fR
Set the normalized length of documents when
\fB\-\-event\-model\fR=\fIdocument\-then\-word\fR.
.TP
\fB\-\-event\-model\fR=\fIEVENTNAME\fR
Set what objects will be considered the
`events' of the probabilistic model.  EVENTNAME
can be one of: word, document, document-then-word.
.IP
Default is `word'.
.TP
\fB\-\-infogain\-event\-model\fR=\fIEVENTNAME\fR
Set what objects will be considered the `events'
when information gain is calculated.  EVENTNAME
can be one of: word, document, document-then-word.
.IP
Default is `document'.
.TP
\fB\-m\fR, \fB\-\-method\fR=\fIMETHOD\fR
Set the word weight-setting method; METHOD may be
one of: active, em, emsimple, kl, knn, maxent,
naivebayes, nbshrinkage, nbsimple, prind,
tfidf_words, tfidf_log_words, tfidf_log_occur,
tfidf, svm.  The default is `naivebayes'.
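.IP
For example, a hypothetical invocation selecting PrInd (`~/model' is
an assumed data directory):
.IP
.nf
rainbow \-d ~/model \-\-method=prind \-\-test\-set=0.3 \-t 1
.fi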
.TP
\fB\-\-print\-word\-scores\fR
During scoring, print the contribution of each
word to each class.
.TP
\fB\-\-smoothing\-dirichlet\-filename\fR=\fIFILE\fR
The file containing the alphas for Dirichlet
smoothing.
.TP
\fB\-\-smoothing\-dirichlet\-weight\fR=\fINUM\fR
The weighting factor by which to multiply the
alphas for Dirichlet smoothing.
.TP
\fB\-\-smoothing\-goodturing\-k\fR=\fINUM\fR
Smooth word probabilities for words that occur NUM
or fewer times.  The default is 7.
.TP
\fB\-\-smoothing\-method\fR=\fIMETHOD\fR
Set the method for smoothing word
probabilities to avoid zeros; METHOD may be one
of: goodturing, laplace, mestimate, wittenbell.
.TP
\fB\-\-uniform\-class\-priors\fR
When setting weights, calculating infogain, and
scoring, use equal prior probabilities on
classes.
.IP
Support Vector Machine options, \fB\-\-method\fR=\fIsvm\fR:
.TP
\fB\-\-svm\-active\-learning=\fR
Use active learning to query the labels and
incrementally (by arg_size) build the barrels.
.TP
\fB\-\-svm\-active\-learning\-baseline=\fR
Incrementally add documents to the training set at
random.
.TP
\fB\-\-svm\-al\-transduce\fR
Do transduction over the unlabeled data during
active learning.
.TP
\fB\-\-svm\-al_init_tsetsize=\fR
Number of random documents to start with in
active learning.
.TP
\fB\-\-svm\-bsize=\fR
Maximum size of the subproblems to construct.
.TP
\fB\-\-svm\-cache\-size=\fR
Number of kernel evaluations to cache.
.TP
\fB\-\-svm\-cost=\fR
Cost to bound the Lagrange multipliers by (default
1000).
.TP
\fB\-\-svm\-df\-counts=\fR
Set df_counts (0=occurrences, 1=words).
.TP
\fB\-\-svm\-epsilon_a=\fR
Tolerance for the bounds of the Lagrange
multipliers (default 0.0001).
.TP
\fB\-\-svm\-kernel=\fR
Type of kernel to use (0=linear, 1=polynomial,
2=Gaussian, 3=sigmoid, 4=Fisher kernel).
.TP
\fB\-\-svm\-quick\-scoring\fR
Turn quick scoring on.
.TP
\fB\-\-svm\-remove\-misclassified=\fR
Remove all of the misclassified examples and
retrain (default none (0); 1=bound, 2=wrong).
.TP
\fB\-\-svm\-rseed=\fR
The random seed to use in the test-in-train
splits.
.TP
\fB\-\-svm\-start\-at=\fR
Which model should be generated first.
.TP
\fB\-\-svm\-suppress\-score\-matrix\fR
Do not print the scores of each test document at
each AL iteration.
.TP
\fB\-\-svm\-test\-in\-train\fR
Do active learning testing inside of the
training; a hack to avoid making the code ten
times more complicated.
.TP
\fB\-\-svm\-tf\-transform=\fR
0=raw, 1=log...
.TP
\fB\-\-svm\-trans\-cost=\fR
Value to assign to C* (default 200).
.TP
\fB\-\-svm\-trans\-hyp\-refresh=\fR
How often the hyperplane should be recomputed
during transduction.  Only applies to SMO
(default 40).
.TP
\fB\-\-svm\-trans\-nobias\fR
Do not use a bias when marking unlabeled
documents.  Use a threshold of 0 to determine
labels instead of some threshold to mark a certain
number of documents for each class.
.TP
\fB\-\-svm\-trans\-npos=\fR
Number of unlabeled documents to label as positive
(default: proportional to the number of labeled
positive docs).
.TP
\fB\-\-svm\-trans\-smart\-vals=\fR
Use the previous problem's values as a starting
point for the next (default true).
.TP
\fB\-\-svm\-transduce\-class=\fR
Override the default class(es) (int) to do
transduction with (default bow_doc_unlabeled).
.TP
\fB\-\-svm\-use\-smo=\fR
Default 1 (use SMO); PR_LOQO is not compiled in.
.TP
\fB\-\-svm\-vote=\fR
Type of voting to use (0=singular, 1=pairwise;
default 0).
.TP
\fB\-\-svm\-weight=\fR
Type of function to use to set the weights of the
documents' words (0=raw_frequency, 1=tfidf,
2=infogain).
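.IP
A sketch of an SVM run (paths and values are illustrative):
.IP
.nf
# linear kernel, one random 30% test split
rainbow \-d ~/model \-\-method=svm \-\-svm\-kernel=0 \\
    \-\-test\-set=0.3 \-t 1
.fi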
.IP
Naive Bayes options, \fB\-\-method\fR=\fInaivebayes\fR:
.TP
\fB\-\-naivebayes\-binary\-scoring\fR
When using naivebayes, use hacky scoring to get
good Precision-Recall curves.
.TP
\fB\-\-naivebayes\-m\-est\-m\fR=\fIM\fR
When using `m'-estimates for smoothing in
NaiveBayes, use M as the value for `m'.  The
default is the size of the vocabulary.
.TP
\fB\-\-naivebayes\-normalize\-log\fR
When using naivebayes, return \fB\-1\fR/log(P(C|d)),
normalized to sum to one, instead of P(C|d).  This
results in values that are not so close to zero
and one.
.IP
Maximum Entropy options, \fB\-\-method\fR=\fImaxent\fR:
.TP
\fB\-\-maxent\-constraint\-docs\fR=\fITYPE\fR
The documents to use for setting the constraints.
The default is train. The other choice is
trainandunlabeled.
.TP
\fB\-\-maxent\-gaussian\-prior\fR
Add a Gaussian prior to each word/class feature
constraint.
.TP
\fB\-\-maxent\-gaussian\-prior\-no\-zero\-constraints\fR
When using a Gaussian prior, do not enforce
constraints that have no training data.
.TP
\fB\-\-maxent\-halt\-by\-accuracy\fR=\fITYPE\fR
When running maxent, halt iterations using the
accuracy of documents.  TYPE is the type of
documents to test.  See
`\-\-em\-halt\-using\-perplexity' for choices for TYPE.
.TP
\fB\-\-maxent\-halt\-by\-logprob\fR=\fITYPE\fR
When running maxent, halt iterations using the
logprob of documents.  TYPE is the type of documents
to test.  See `\-\-em\-halt\-using\-perplexity' for
choices for TYPE.
.TP
\fB\-\-maxent\-iteration\-docs\fR=\fITYPE\fR
The types of documents to use for maxent
iterations.  The default is train.  TYPE is the type
of documents to test.  See
`\-\-em\-halt\-using\-perplexity' for choices for TYPE.
.TP
\fB\-\-maxent\-iterations\fR=\fINUM\fR
The number of iterative scaling iterations to
perform.  The default is 40.
.TP
\fB\-\-maxent\-keep\-features\-by\-mi\fR=\fINUM\fR
The number of top words by mutual information per
class to use as features.  Zero implies no pruning
and is the default.
.TP
\fB\-\-maxent\-logprob\-constraints\fR
Set constraints to be the log prob of the word.
.TP
\fB\-\-maxent\-print\-accuracy\fR=\fITYPE\fR
When running maximum entropy, print the accuracy
of documents at each round.  TYPE is the type of
documents to measure accuracy on.  See
`\-\-em\-halt\-using\-perplexity' for choices for TYPE.
.TP
\fB\-\-maxent\-prior\-variance\fR=\fINUM\fR
The variance to use for the Gaussian prior.  The
default is 0.01.
.TP
\fB\-\-maxent\-prune\-features\-by\-count\fR=\fINUM\fR
Prune the word/class feature set, keeping only
those features that have at least NUM occurrences
in the training set.
.TP
\fB\-\-maxent\-scoring\-hack\fR
Use the smoothed naive Bayes probability for
word/class pairs with zero occurrences during
scoring.
.TP
\fB\-\-maxent\-smooth\-counts\fR
Add 1 to the count of each word/class pair when
calculating the constraint values.
.TP
\fB\-\-maxent\-vary\-prior\-by\-count\fR
Multiply log(1 + N(w,c)) times the variance when
using a Gaussian prior.
.TP
\fB\-\-maxent\-vary\-prior\-by\-count\-linearly\fR
Multiply N(w,c) times the variance when using a
Gaussian prior.
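.IP
A sketch of a maximum entropy run (paths and values are
illustrative):
.IP
.nf
# 40 iterative scaling iterations with a Gaussian prior
rainbow \-d ~/model \-\-method=maxent \-\-maxent\-gaussian\-prior \\
    \-\-maxent\-iterations=40 \-\-test\-set=0.3 \-t 1
.fi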
.IP
K-nearest neighbor options, \fB\-\-method\fR=\fIknn\fR:
.TP
\fB\-\-knn\-k\fR=\fIK\fR
Number of neighbours to use for nearest neighbour.
Defaults to 30.
.TP
\fB\-\-knn\-weighting\fR=\fIxxx\fR.xxx
Weighting scheme to use, coded like SMART.
Defaults to nnn.nnn.  The first three chars describe
how the model documents are weighted, the second
three describe how the test document is weighted.
The codes for each position are described in knn.c.
Classification consists of summing the scores per
class for the k nearest neighbour documents and
sorting.
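.IP
A sketch (paths are hypothetical; nnn.nnn is the documented default
weighting):
.IP
.nf
rainbow \-d ~/model \-\-method=knn \-\-knn\-k=30 \\
    \-\-knn\-weighting=nnn.nnn \-\-test\-set=0.3 \-t 1
.fi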
.IP
EMSIMPLE options:
.TP
\fB\-\-emsimple\-no\-init\fR
Use this option when using emsimple as the
secondary method for genem.
.TP
\fB\-\-emsimple\-num\-iterations\fR=\fINUM\fR
Number of EM iterations to run when building
model.
.TP
\fB\-\-emsimple\-print\-accuracy\fR=\fITYPE\fR
When running emsimple, print the accuracy of
documents at each EM round.  Type can be
validation, train, or test.
.IP
EM options:
.TP
\fB\-\-em\-anneal\fR
Use Deterministic annealing EM.
.TP
\fB\-\-em\-anneal\-normalizer\fR
When running EM, do deterministic annealing-ish
stuff with the unlabeled normalizer.
.TP
\fB\-\-em\-binary\fR
Do special tricks for the binary case.
.TP
\fB\-\-em\-binary\-neg\-classname\fR=\fICLASS\fR
Specify the name of the negative class if building
a binary classifier.
.TP
\fB\-\-em\-binary\-pos\-classname\fR=\fICLASS\fR
Specify the name of the positive class if building
a binary classifier.
.TP
\fB\-\-em\-compare\-to\-nb\fR
When building an EM class barrel, show doc stats
for the naivebayes barrel equivalent.  Only use in
conjunction with \fB\-\-test\fR.
.TP
\fB\-\-em\-crossentropy\fR
Use crossentropy instead of naivebayes for
scoring.
.TP
\fB\-\-em\-halt\-using\-accuracy\fR=\fITYPE\fR
When running EM, halt when accuracy plateaus.
TYPE is the type of documents to measure accuracy on.
.IP
Choices are `validation', `train', `test',
`unlabeled', `trainandunlabeled', and
`trainandunlabeledloo'.
.TP
\fB\-\-em\-halt\-using\-perplexity\fR=\fITYPE\fR
When running EM, halt when perplexity plateaus.
TYPE is the type of documents to measure perplexity on.
.IP
Choices are `validation', `train', `test',
`unlabeled', `trainandunlabeled', and
`trainandunlabeledloo'.
.TP
\fB\-\-em\-labeled\-for\-start\-only\fR
Use the labeled documents to set the starting
point for EM, but ignore them during the
iterations.
.TP
\fB\-\-em\-multi\-hump\-init\fR=\fIMETHOD\fR
When initializing mixture components, how to
assign component probs to documents.  Default is
`spread'.  The other choice is `spiked'.
.TP
\fB\-\-em\-multi\-hump\-neg\fR=\fINUM\fR
Use NUM center negative classes.  Only use in the
binary case.  Must be using scoring method
nb_score.
.TP
\fB\-\-em\-num\-iterations\fR=\fINUM\fR
Number of EM iterations to run when building
model.
.TP
\fB\-\-em\-perturb\-starting\-point\fR=\fITYPE\fR
Instead of starting EM with P(w|c) from the
labeled training data, start from values that are
randomly sampled from the multinomial specified by
the labeled training data.  TYPE specifies what
distribution to use for the perturbation; choices
are `gaussian', `dirichlet', and `none'.  Default
is `none'.
.TP
\fB\-\-em\-print\-accuracy\fR=\fITYPE\fR
When running EM, print the accuracy of
documents at each round.  TYPE is the type of
documents to measure accuracy on.  See
`\-\-em\-halt\-using\-perplexity' for choices for TYPE.
.TP
\fB\-\-em\-print\-perplexity\fR=\fITYPE\fR
When running EM, print the perplexity of
documents at each round.  TYPE is the type of
documents to measure perplexity on.  See
`\-\-em\-halt\-using\-perplexity' for choices for TYPE.
.TP
\fB\-\-em\-print\-top\-words\fR
Print the top 10 words per class for each EM
iteration.
.TP
\fB\-\-em\-save\-probs\fR
On each EM iteration, save all P(C|w) to a file.
.TP
\fB\-\-em\-set\-vocab\-from\-unlabeled\fR
Remove from the vocabulary all words not used in
the unlabeled data.
.TP
\fB\-\-em\-stat\-method\fR=\fISTAT\fR
The method to convert scores to probabilities.
The default is `nb_score'.
.TP
\fB\-\-em\-temp\-reduce\fR=\fINUM\fR
Temperature reduction factor for deterministic
annealing.  Default is 0.9.
.TP
\fB\-\-em\-temperature\fR=\fINUM\fR
Initial temperature for deterministic annealing.
Default is 200.
.TP
\fB\-\-em\-unlabeled\-normalizer\fR=\fINUM\fR
Number of unlabeled docs it takes to equal a
labeled doc.  Defaults to one.
.TP
\fB\-\-em\-unlabeled\-start\fR=\fITYPE\fR
When initializing the EM starting point, how
the unlabeled docs contribute.  Default is `zero'.
.IP
Other choices are `prior', `random', and `even'.
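.IP
A sketch of an EM run over labeled and unlabeled documents (paths
and values are illustrative):
.IP
.nf
rainbow \-d ~/model \-\-method=em \-\-em\-num\-iterations=10 \\
    \-\-unlabeled\-set=0.5 \-\-test\-set=0.2 \-t 1
.fi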
.IP
Active Learning options:
.TP
\fB\-\-active\-add\-per\-round\fR=\fINUM\fR
Specify the number of documents to label
each round.  The default is 4.
.TP
\fB\-\-active\-beta\fR=\fINUM\fR
Increase spread of document densities.
.TP
\fB\-\-active\-binary\-pos\fR=\fICLASS\fR
The name of the positive class for binary
classification.  Required for relevance sampling.
.TP
\fB\-\-active\-committee\-size\fR=\fINUM\fR
The number of committee members to use with QBC.
Default is 1.
.TP
\fB\-\-active\-final\-em\fR
Finish with a full round of EM.
.TP
\fB\-\-active\-no\-final\-em\fR
Finish without a full round of EM.
.TP
\fB\-\-active\-num\-rounds\fR=\fINUM\fR
The number of active learning rounds to
perform.  The default is 10.
.TP
\fB\-\-active\-perturb\-after\-em\fR
Perturb after running EM to create committee
members.
.TP
\fB\-\-active\-pr\-print\-stat\-summary\fR
Print the precision recall curves used for score
to probability remapping.
.TP
\fB\-\-active\-pr\-window\-size\fR=\fINUM\fR
Set the window size for precision-recall score to
probability remapping.  The default is 20.
.TP
\fB\-\-active\-print\-committee\-matrices\fR
Print the confusion matrix for each committee
member at each round.
.TP
\fB\-\-active\-qbc\-low\-kl\fR
Select documents with the lowest kl-divergence
instead of the highest.
.TP
\fB\-\-active\-remap\-scores\-pr\fR
Remap scores with sneaky precision-recall
tricks.
.TP
\fB\-\-active\-secondary\-method\fR=\fIMETHOD\fR
The underlying method for active learning to use.
The default is `naivebayes'.
.TP
\fB\-\-active\-selection\-method\fR=\fIMETHOD\fR
Specify the selection method for picking unlabeled
docs.  One of uncertainty, relevance, qbc, random.
The default is `uncertainty'.
.TP
\fB\-\-active\-stream\-epsilon\fR=\fINUM\fR
The rate factor for selecting documents in stream
sampling.
.TP
\fB\-\-active\-test\-stats\fR
Generate output for test docs every n rounds.
.TP
\fB\-?\fR, \fB\-\-help\fR
Give this help list
.TP
\fB\-\-usage\fR
Give a short usage message
.TP
\fB\-V\fR, \fB\-\-version\fR
Print program version
.PP
Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.
.SH "REPORTING BUGS"
Please report bugs related to this program to Andrew McCallum
<mccallum@cs.cmu.edu>.  If the bug is related to the Debian package,
report it to submit@bugs.debian.org.
.SH "SEE ALSO"
.BR arrow (1),
.BR archer (1),
.BR crossbow (1).
.PP
The full documentation for
.B rainbow
will be provided as a Texinfo manual.  If the
.B info
and
.B rainbow
programs are properly installed at your site, the command
.IP
.B info rainbow
.PP
should give you access to the complete manual.
.PP 
You can also find documentation and updates for 
.B libbow 
at http://www.cs.cmu.edu/~mccallum/bow