=head1 NAME
The Swish-e FAQ - Answers to Common Questions
=head1 Frequently Asked Questions
=head2 General Questions
=head3 What is Swish-e?
Swish-e is the B<S>imple B<W>eb B<I>ndexing B<S>ystem for B<H>umans -
B<E>nhanced. With it, you can quickly and easily index directories of
files or remote web sites and search the generated indexes for words
and phrases.
=head3 So, is Swish-e a search engine?
Well, yes. Probably the most common use of Swish-e is to provide a search
engine for web sites. The Swish-e distribution includes CGI scripts that
can be used to add a I<search engine> to your web site. The CGI
scripts can be found in the F<example> directory of the distribution
package. See the F<README> file there for information about the scripts.
But Swish-e can also be used to index all sorts of data, such as email
messages, data stored in a relational database management system,
XML documents, or documents such as Word and PDF documents -- or any
combination of those sources at the same time. Searches can be limited
to fields or I<MetaNames> within a document, or limited to areas within
an HTML document (e.g. body, title). Programs other than CGI applications
can use Swish-e, as well.
=head3 Should I upgrade if I'm already running a previous version
of Swish-e?
A large number of bug fixes, feature additions, and logic corrections were
made in version 2.2. In addition, indexing speed has been drastically
improved (with reports of indexing times dropping from four hours to five
minutes), and major parts of the indexing and search parsers have been
rewritten. There are better debugging options, enhanced output formats,
more document meta data (e.g. last modified date, document summary),
options for indexing from external data sources, and faster spidering,
to name just a few changes. (See the CHANGES file for more information.)
Since so much effort has gone into version 2.2, support for previous
versions will probably be limited.
=head3 Are there binary distributions available for Swish-e on platform foo?
Foo? Well, yes, there are some binary distributions available. Please see
the Swish-e web site at http://swish-e.org/ for a list.
In general, it is recommended that you build Swish-e from source,
if possible.
=head3 Do I need to reindex my site each time I upgrade to a new Swish-e
version?
At times it might not strictly be necessary, but since you don't really
know if anything in the index has changed, it is a good rule to reindex.
=head3 What's the advantage of using the libxml2 library for parsing HTML?
Swish-e may be linked with libxml2, a library for working with HTML and
XML documents; when it is, Swish-e can use libxml2 to parse those
documents.
The libxml2 parser is better than Swish-e's built-in HTML
parser. It offers more features and does a much better job of
extracting the text from a web page. In addition, you can use the
C<ParserWarningLevel> configuration setting to find structural errors
in your documents that could (and, with Swish-e's built-in HTML parser,
would) cause documents to be indexed incorrectly.
Libxml2 is not required, but is strongly recommended for parsing HTML
documents. It's also recommended for parsing XML, as it offers many
more features than the internal Expat xml.c parser.
The internal HTML parser will have limited support, and it does have a
number of bugs. For example, HTML entities may not always be correctly
converted, and entities in properties are not converted at all. The
internal parser also tends to get confused by invalid HTML, where the
libxml2 parser does not get confused as often, and document structure
is detected better by the libxml2 parser.
If you are using the Perl module (the Perl interface to the Swish-e C
library) you may wish to build two versions of Swish-e: one with the
libxml2 library linked into the binary, and one without, and build the
Perl module against the library without libxml2. This saves
space in the library. Hopefully, the library will someday be
split into separate indexing and searching code (volunteers welcome).
=head3 Does Swish-e include a CGI interface?
Yes. Kind of.
There are two example CGI scripts included, swish.cgi and search.cgi.
Both are installed at F<$prefix/lib/swish-e>.
Both require a bit of work to set up and use. Swish.cgi is probably what
most people will want to use, as it contains more features. Search.cgi
is for those who want to start with a small script and customize it to
fit their needs. An example of using swish.cgi is given in
the L<INSTALL|INSTALL> man page and in the swish.cgi documentation.
As is often the case, both will be easier to use if you first read the
documentation.
Please use caution about CGI scripts found on the Internet for use with Swish-e.
Some are not secure.
The included example CGI scripts were designed with security in mind.
Regardless, you are encouraged to have your local Perl expert review them
(and all other CGI scripts you use) before placing them into production.
This is just good policy.
=head3 How secure is Swish-e?
We know of no security issues with using Swish-e. Careful attention
has been paid to common security problems, such as buffer overruns,
while programming Swish-e.
The most likely security issue with Swish-e is when it is run via
a poorly written CGI interface. This is not limited to CGI scripts
written in Perl, as it's just as easy to write an insecure CGI script
in C, Java, PHP, or Python. A good source of information is included
with the Perl distribution. Type C<perldoc perlsec> at your local
prompt for more information. Another must-read document is located at
C<http://www.w3.org/Security/faq/wwwsf4.html>.
Note that there are many I<free> yet insecure and poorly written CGI
scripts available -- even some designed for use with Swish-e. Please
carefully review any CGI script you use. Free is not such a good price
when you get your server hacked...
=head3 Should I run Swish-e as the superuser (root)?
No. Never.
=head3 What files does Swish-e write?
Swish-e writes the index file, of course. This is specified with the
C<IndexFile> configuration directive or by the C<-f> command line switch.
The index file is actually a collection of files, but all start with
the file name specified with the C<IndexFile> directive or the C<-f>
command line switch.
For example, the file ending in F<.prop> contains the document properties.
When creating the index files Swish-e appends the extension F<.temp>
to the index file names. When indexing is complete Swish-e renames the
F<.temp> files to the index files specified by C<IndexFile> or C<-f>.
This is done so that existing indexes remain untouched until it completes
indexing.
Swish-e also writes temporary files in some cases during indexing
(e.g. C<-S http>, or C<-S prog> with filters), when merging, and when
using C<-e>. Temporary files are created with the mkstemp(3) function
(with 0600 permissions on Unix-like operating systems).
The temporary files are created in the directory specified by the
environment variables C<TMPDIR> and C<TMP>, in that order. If those
are not set then Swish-e uses the configuration setting
L<TmpDir|SWISH-CONFIG/"item_TmpDir">. Otherwise, the temporary files
will be created in the current directory.
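That lookup order can be sketched in Perl; C<resolve_tmpdir> is a
hypothetical helper written for illustration, not part of Swish-e itself:

```perl
# Sketch of the temporary-directory lookup order described above.
# resolve_tmpdir() is a hypothetical illustration, not Swish-e code.
use strict;
use warnings;

sub resolve_tmpdir {
    my ($tmpdir_setting) = @_;   # value of the TmpDir config setting, if any

    # TMPDIR and TMP environment variables are checked first, in that
    # order, then the TmpDir setting, then the current directory.
    for my $dir ( $ENV{TMPDIR}, $ENV{TMP}, $tmpdir_setting ) {
        return $dir if defined $dir && length $dir;
    }
    return '.';
}
```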
=head3 Can I index PDF and MS-Word documents?
Yes, you can use a I<Filter> to convert documents while indexing, or you
can use a program that "feeds" documents to Swish-e that have already
been converted. See C<Indexing> below.
=head3 Can I index documents on a web server?
Yes, Swish-e provides two ways to index (spider) documents on a web
server. See C<Spidering> below.
Swish-e can retrieve documents from a file system or from a remote web
server. It can also execute a program that returns documents to it.
This program can retrieve documents from a database, filter compressed
document files, convert PDF files, extract data from mail archives,
or spider remote web sites.
=head3 Can I implement keywords in my documents?
Yes, Swish-e can associate words with I<MetaNames> while indexing,
and you can limit your searches to these MetaNames while searching.
In your HTML files you can put keywords in HTML META tags or in XML blocks.
META tags can have two formats in your source documents:
<META NAME="DC.subject" CONTENT="digital libraries">
And in XML format (can also be used in HTML documents when using libxml2):
<meta2>
Some Content
</meta2>
Then, to inform Swish-e about the existence of the meta names in your
documents, add a line like this to your configuration file:
MetaNames DC.subject meta1 meta2
When searching you can now limit some or all search terms to that
MetaName. For example, you could look for documents that contain the
word apple and that also have either fruit or cooking in the DC.subject
meta tag.
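Such a search, assuming the C<MetaNames> line shown above and an
already-built index, might look like this:

```shell
swish-e -w 'apple and DC.subject=(fruit or cooking)'
```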
=head3 What are document properties?
A document property is typically data that describes the document.
For example, properties might include a document's path name, its last
modified date, its title, or its size. Swish-e stores a document's
properties in the index file, and they can be reported back in search
results.
Swish-e also uses properties for sorting. You may sort your results by
one or more properties, in ascending or descending order.
Properties can also be defined within your documents. HTML and
XML files can specify tags (see previous question) as properties.
The I<contents> of these tags can then be returned with search results.
These user-defined properties can also be used for sorting search results.
For example, if you had the following in your documents
<meta name="creator" content="accounting department">
and C<creator> is defined as a property (see C<PropertyNames> in
L<SWISH-CONFIG|SWISH-CONFIG>) Swish-e can return C<accounting department>
with the result for that document.
swish-e -w foo -p creator
Or for sorting:
swish-e -w foo -s creator
=head3 What's the difference between MetaNames and PropertyNames?
MetaNames allow keyword searches in your documents. That is, you can
use MetaNames to restrict searches to just parts of your documents.
PropertyNames, on the other hand, define text that can be returned with
results, and can be used for sorting.
Both use I<meta tags> found in your documents (as shown in the above two
questions) to define the text you wish to use as a property or meta name.
You may define a tag as B<both> a property and a meta name. For example:
<meta name="creator" content="accounting department">
placed in your documents and then using configuration settings of:
PropertyNames creator
MetaNames creator
will allow you to limit your searches to documents created by accounting:
swish-e -w 'foo and creator=(accounting)'
That will find all documents with the word C<foo> that also have a creator
meta tag that contains the word C<accounting>. This is using MetaNames.
And you can also say:
swish-e -w foo -p creator
which will return all documents with the word C<foo>, but the results will
also include the contents of the C<creator> meta tag along with results.
This is using properties.
You can use properties and meta names at the same time, too:
swish-e -w creator=(accounting or marketing) -p creator -s creator
That searches only the C<creator> I<meta name> for either of the words
C<accounting> or C<marketing>, prints out the contents of the
C<creator> I<property>, and sorts the results by the C<creator>
I<property>.
(See also the C<-x> output format switch in L<SWISH-RUN|SWISH-RUN>.)
=head3 Can Swish-e index multi-byte characters?
No, and this would require much work to change. But Swish-e works with
eight-bit characters, so many character sets can be used. Note that it
calls the ANSI C tolower() function, which depends on the current
locale setting. See C<locale(7)> for more information.
=head2 Indexing
=head3 How do I pass Swish-e a list of files to index?
Currently, there is no configuration directive to include a file that
contains a list of files to index. But there is a directive to include
another configuration file:
IncludeConfigFile /path/to/other/config
And in C</path/to/other/config> you can say:
IndexDir file1 file2 file3 file4 file5 ...
IndexDir file20 file21 file22
You may also specify more than one configuration file on the command line:
./swish-e -c config_one config_two config_three
Another option is to create a directory with symbolic links of the files
to index, and index just that directory.
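The symlink approach might look like this (the source paths are
hypothetical examples; note that C<ln -s> will happily create the links
even before the targets exist):

```shell
# Create a directory of symlinks pointing at just the files to index.
mkdir -p /tmp/swish-links
ln -sf /home/user/docs/report.html /tmp/swish-links/report.html
ln -sf /home/user/docs/notes.html  /tmp/swish-links/notes.html

# The Swish-e config would then point at that directory:
#   IndexDir /tmp/swish-links
```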
=head3 How does Swish-e know which parser to use?
Swish-e can parse HTML, XML, and text documents. The parser is chosen by
associating a file extension with a parser via the C<IndexContents>
directive. You may set the default parser with the C<DefaultContents>
directive. If a document is not assigned a parser it will default to
the HTML parser (HTML2 if built with libxml2).
You may use Filters or an external program to convert documents to HTML,
XML, or text.
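A typical set of mappings might look like this (the extensions chosen
here are just examples):

```text
# Parse .htm/.html with the libxml2 HTML parser, .xml with the
# libxml2 XML parser, and .txt as plain text.
IndexContents HTML2 .htm .html
IndexContents XML2  .xml
IndexContents TXT   .txt
DefaultContents TXT
```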
=head3 Can I reindex and search at the same time?
Yes. Starting with version 2.2, Swish-e indexes to temporary files and
then renames them when indexing is complete. On most systems renames
are atomic. But since Swish-e generates more than one file during
indexing, there will be a very short period of time, between renaming
the various files, when the index is out of sync.
Settings in F<src/config.h> control some options related to temporary files,
and their use during indexing.
=head3 Can I index phrases?
Phrases are indexed automatically. To search for a phrase simply place
double quotes around the phrase.
For example:
swish-e -w 'free and "fast search engine"'
=head3 How can I prevent phrases from matching across sentences?
Use the
L<BumpPositionCounterCharacters|SWISH-CONFIG/"item_BumpPositionCounterCharacters">
configuration directive.
=head3 Swish-e isn't indexing a certain word or phrase.
There are a number of configuration parameters that control what Swish-e
considers a "word" and it has a debugging feature to help pinpoint
any indexing problems.
Configuration file directives (L<SWISH-CONFIG|SWISH-CONFIG>)
C<WordCharacters>, C<BeginCharacters>, C<EndCharacters>,
C<IgnoreFirstChar>, and C<IgnoreLastChar> are the main settings that
Swish-e uses to define a "word". See L<SWISH-CONFIG|SWISH-CONFIG> and
L<SWISH-RUN|SWISH-RUN> for details.
Swish-e also uses compile-time defaults for many settings. These are
located in F<src/config.h> file.
The command line arguments C<-k>, C<-v>, and C<-T> are useful when
debugging these problems. Using C<-T INDEXED_WORDS> while indexing will
display each word as it is indexed. You should specify only one file
when using this feature, since it can generate a lot of output.
./swish-e -c my.conf -i problem.file -T INDEXED_WORDS
You may also wish to index a single file containing words that are not
being indexed as you expect, and use C<-T> to output debugging
information about the index. A useful command might be:
./swish-e -f index.swish-e -T INDEX_FULL
Once you see how Swish-e is parsing and indexing your words, you can
adjust the configuration settings mentioned above to control what words
are indexed.
Another useful command might be:
./swish-e -c my.conf -i problem.file -T PARSED_WORDS INDEXED_WORDS
This will show white-spaced words parsed from the document (PARSED_WORDS),
and how those words are split up into separate words for indexing
(INDEXED_WORDS).
=head3 How do I keep Swish-e from indexing numbers?
Swish-e indexes words as defined by the C<WordCharacters> setting, as
described above. So to avoid indexing numbers you simply remove digits
from the C<WordCharacters> setting.
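For example, a setting along these lines would do it (a sketch only:
the exact character list should match what your documents actually
need, and this one omits all digits):

```text
# A letters-only WordCharacters setting -- digits removed so numbers
# are not indexed. Extend with any punctuation your documents need.
WordCharacters abcdefghijklmnopqrstuvwxyz
```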
There are also some settings in F<src/config.h> that control what "words"
are indexed. You can configure swish to never index words that are all
digits, vowels, or consonants, or that contain more than some consecutive
number of digits, vowels, or consonants. In general, you won't need to
change these settings.
Also, there's an experimental feature called C<IgnoreNumberChars>
which allows you to define a set of characters that describe a number.
If a word is made up of B<only> those characters it will not be indexed.
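For example, to skip words made up entirely of digits and the usual
numeric punctuation (a sketch of the experimental setting described
above):

```text
# Words consisting only of these characters (e.g. 1,234.56) are
# treated as numbers and not indexed.
IgnoreNumberChars 0123456789,.
```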
=head3 Swish-e crashes and burns on a certain file. What can I do?
This shouldn't happen. If it does please post to the Swish-e discussion
list the details so it can be reproduced by the developers.
In the meantime, you can use a C<FileRules> directive to exclude the
particular file name, pathname, or title. If there are serious
problems indexing certain types of files, they may not have valid text
in them (they may be binary files, for instance). You can use
C<NoContents> to avoid indexing the content of that type of file.
Swish-e will issue a warning if an embedded null character is found in a
document. This warning is an indication that you are trying to index
binary data. If you need to index binary files, try to find a program
that will extract the text (e.g. strings(1), catdoc(1), pdftotext(1)).
=head3 How do I prevent indexing of some documents?
When using the file system to index your files you can use the
C<FileRules> directive. Other than C<FileRules title>, C<FileRules>
only works with the file system (C<-S fs>) indexing method, not with
C<-S prog> or C<-S http>.
If you are spidering, use a F<robots.txt> file in your document root.
This is a standard way to exclude files from search engines, and it is
fully supported by Swish-e. See http://www.robotstxt.org/
You can also modify the F<spider.pl> spider perl program to skip, index
content only, or spider only listed web pages. Type C<perldoc spider.pl>
in the C<prog-bin> directory for details.
If using the libxml2 library for parsing HTML, you may also use the Meta
Robots Exclusion in your documents:
<meta name="robots" content="noindex">
See the L<obeyRobotsNoIndex|SWISH-CONFIG/"item_obeyRobotsNoIndex"> directive.
=head3 How do I prevent indexing parts of a document?
To prevent Swish-e from indexing a common header, footer, or navigation
bar when you are using libxml2 for parsing HTML, you may place a fake
HTML tag around the text you wish to ignore and use the
C<IgnoreMetaTags> directive. This will generate an error message if
C<ParserWarningLevel> is set, since the fake tag is invalid HTML.
C<IgnoreMetaTags> works with XML documents (and HTML documents when
using libxml2 as the parser), but not with documents parsed by the text
(TXT) parser.
If you are using the libxml2 parser (HTML2 and XML2) then you can use
the following comments in your documents to prevent indexing:
<!-- SwishCommand noindex -->
<!-- SwishCommand index -->
and these may also be used:
<!-- noindex -->
<!-- index -->
=head3 How do I modify the path or URL of the indexed documents?
Use the C<ReplaceRules> configuration directive to rewrite path names
and URLs. If you are using C<-S prog> input method you may set the path
to any string.
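For example, a rule like this (the paths and host name are hypothetical)
rewrites the local document root into the URL users should see in search
results:

```text
ReplaceRules replace "/usr/local/apache/htdocs" "http://www.example.com"
```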
=head3 How can I index data from a database?
Use the "prog" document source method of indexing. Write a program to
extract the data from your database and format it as XML, HTML,
or text. See the examples in the C<prog-bin> directory, and the next
question.
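As a sketch of the idea, the program below emits hard-coded rows in
Swish-e's C<-S prog> protocol (headers, a blank line, then the
document). The rows, the C<db://> path scheme, and the script itself
are invented for illustration; a real program would fetch the rows with
DBI instead:

```perl
#!/usr/bin/perl
# Sketch of a -S prog input program: each "database row" becomes one
# XML document. The @rows data is hard-coded for illustration; a real
# program would pull it from a database with DBI.
use strict;
use warnings;

# Build one record in the prog protocol: headers, blank line, content.
sub emit_record {
    my ($row) = @_;
    my $doc = "<doc>\n<title>$row->{title}</title>\n"
            . "<body>$row->{body}</body>\n</doc>\n";
    return "Path-Name: db://record/$row->{id}\n"
         . "Content-Length: " . length($doc) . "\n"
         . "Document-Type: XML\n\n"
         . $doc;
}

my @rows = (
    { id => 1, title => 'First record',  body => 'apple pie recipe' },
    { id => 2, title => 'Second record', body => 'marketing report' },
);

print emit_record($_) for @rows;
```

You would then index with something like C<swish-e -S prog -i ./dbdump.pl>
(the script name here is made up).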
=head3 How do I index my PDF, Word, and compressed documents?
Swish-e can only parse HTML, XML, and TXT (text) files internally, but
it can make use of I<filters> that convert other types of files, such
as MS Word documents, PDF, or gzipped files, into one of the file types
that Swish-e understands.
Please see L<SWISH-CONFIG|SWISH-CONFIG/"Document Filter Directives">
and the examples in the F<filters> and F<filter-bin> directories for
more information.
See the next question to learn about the filtering options with Swish-e.
=head3 How do I filter documents?
The term "filter" in Swish-e means the conversion of a document of one
type (one that Swish-e cannot index directly) into a type that Swish-e
can index, namely HTML, plain text, or XML. To add to the confusion,
there are a number of ways to accomplish this in Swish-e, so here's a
bit of background.
The L<FileFilter|SWISH-CONFIG/"Document Filter Directives"> directive
was added to Swish-e first. This feature allows you to specify a
program to run for documents that match a given file extension. For
example, to filter PDF files (files that end in .pdf) you can specify
the configuration setting:

    FileFilter .pdf pdftotext "'%p' -"

which says to run the program "pdftotext", passing it the pathname of
the file (%p) and a dash (which tells pdftotext to write to stdout).
Then for each .pdf file Swish-e runs this program and reads the
filtered document from the filter program's output.
This has the advantage of being easy to set up: a single line in the
config file is all that is needed to add the filter to Swish-e. But it
also has a number of problems. For example, if you use a Perl script to
do your filtering it can be very slow, since the filter script must be
run (and thus compiled) for each processed document. This is
exacerbated when using the -S http method, since that method also runs
a Perl script for every URL fetched. Also, when using the -S prog
method of input (reading input from a program), using FileFilter means
that Swish-e must first read the file in from the external program and
then write it out to a temporary file before running the filter.
With -S prog it makes much more sense to filter the document in the
program that is fetching the documents than to have Swish-e read the
file into memory, write it to a temporary file, and then run an
external program.
The Swish-e distribution contains a couple of example -S prog programs. F<spider.pl> is a reasonably
full-featured web spider that offers many more options than the -S http method. And it is much faster
than running -S http, too.
The spider has a Perl configuration file, which means you can add
programming logic right into the configuration file without editing the
spider program itself. One bit of logic provided in the spider's
configuration file is a "call-back" function that allows you to filter
the content. In other words, before the spider passes a fetched web
document to Swish-e for indexing, it can call a simple subroutine in
the spider's configuration file, passing the document and its content
type. The subroutine can then look at the content type and decide
whether the document needs to be filtered. For example, when processing
a document of type "application/msword" the call-back subroutine might
call the doc2txt.pm Perl module, and a document of type
"application/pdf" could use the pdf2html.pm module. The
F<prog-bin/SwishSpiderConfig.pl> file shows this usage.
This system works reasonably well, but it means that more work is
required to set up the filters. First, you must explicitly check for
specific content types and then call the appropriate Perl module; and
second, you have to know how each module must be called and how each
returns the (possibly modified) content.
In comes SWISH::Filter.
To make things easier, the SWISH::Filter Perl module was created. The
idea of this module is that one interface is used to filter all types
of documents. So instead of checking for specific content types, you
just pass the content type and the document to the SWISH::Filter
module, and it returns a new content type and document if the document
was filtered. The filters that do the actual work are designed with a
standard interface and work like filter "plug-ins". Adding a new filter
means just downloading it to a directory; no changes are needed to the
spider's configuration file. Download a filter for Postscript, and the
next time you run indexing your Postscript files will be indexed.
Since the filters are standardized, hopefully when you need to filter
documents of a specific type there will already be a filter ready for
your use.
Note that these Perl modules may or may not do the actual conversion of
a document themselves. For example, the PDF conversion module calls the
pdfinfo and pdftotext programs. Those programs (part of the Xpdf
package) must be installed separately from the filters.
The SwishSpiderConfig.pl example spider configuration file shows how to
use the SWISH::Filter module for filtering. This file is installed at
F<$prefix/share/doc/swish-e/examples/prog-bin>, where $prefix is
normally /usr/local on Unix-type machines.
The SWISH::Filter method of filtering can also be used with the -S http
method of indexing. By default the F<swishspider> program (the Perl
helper script that fetches documents from the web) will attempt to use
the SWISH::Filter module if it can be found in Perl's library path.
This path is set automatically for spider.pl, but not for swishspider
(because it would slow down a method that's already slow, and spider.pl
is recommended over the -S http method).
Therefore, all that's required to use this system with -S http is
setting the C<PERL5LIB> environment variable (which populates Perl's
@INC array) to point to the filter directory.
For example, if the swish-e distribution was unpacked into ~/swish-e:
PERL5LIB=~/swish-e/filters swish-e -c conf -S http
will allow the -S http method to make use of the SWISH::Filter module.
Note that if you are not using the SWISH::Filter module you may wish to edit the F<swishspider> program
and disable the use of the SWISH::Filter module using this setting:
use constant USE_FILTERS => 0; # disable SWISH::Filter
This prevents the program from attempting to use the SWISH::Filter module for every non-text
URL that is fetched. Of course, if you are concerned with indexing speed you should be using
the -S prog method with spider.pl instead of -S http.
If you are not spidering, but you still want to make use of the SWISH::Filter module for
filtering you can use the DirTree.pl program (in $prefix/lib/swish-e). This is a simple
program that traverses the file system and uses SWISH::Filter for filtering.
Here's two examples of how to run a filter program, one using Swish-e's
C<FileFilter> directive, another using a C<prog> input method program.
See the F<SwishSpiderConfig.pl> file for an example of using the SWISH::Filter
module.
These filters simply use the program C</bin/cat> as a filter and only
index .html files.
First, using the C<FileFilter> method, here's the entire configuration
file (swish.conf):
IndexDir .
IndexOnly .html
FileFilter .html "/bin/cat" "'%p'"
and index with the command
swish-e -c swish.conf -v 1
Now, here's the same thing using the C<-S prog> document source input
method and a Perl program called catfilter.pl. You can see that it's
much more work than using the C<FileFilter> method above, but it
provides a place to do additional processing. In this example, the
C<prog> method is only slightly faster. But if you needed a Perl script
to run as a FileFilter, then C<prog> will be significantly faster.
#!/usr/local/bin/perl -w
use strict;
use File::Find;  # for recursing a directory tree

$/ = undef;      # slurp each file whole

find(
    { wanted => \&wanted, no_chdir => 1 },
    '.',
);

sub wanted {
    return if -d;
    return unless /\.html$/;

    my $mtime = (stat)[9];

    # run /bin/cat as the filter, reading its output from FH
    my $child = open( FH, '-|' );
    die "Failed to fork $!" unless defined $child;
    exec '/bin/cat', $_ unless $child;

    my $content = <FH>;
    close FH;
    my $size = length $content;

    # the headers must be followed by a blank line, then the content
    print <<EOF;
Content-Length: $size
Last-Mtime: $mtime
Path-Name: $_

EOF
    print $content;
}
And index with the command:
swish-e -S prog -i ./catfilter.pl -v 1
This example will probably not work under Windows due to the '-|' open.
A simple piped open may work just as well:
That is, replace:
my $child = open( FH, '-|' );
die "Failed to fork $!" unless defined $child;
exec '/bin/cat', $_ unless $child;
with this:
open( FH, "/bin/cat $_ |" ) or die $!;
Perl will try to avoid running the command through the shell if meta
characters are not passed to the open. See C<perldoc -f open> for
more information.
=head3 Eh, but I just want to know how to index PDF documents!
See the examples in the F<conf> directory and the comments in
the F<SwishSpiderConfig.pl> file.
See the previous question for the details on filtering. The method you decide to use
will depend on how fast you want to index, and your comfort level with using Perl modules.
Regardless of the filtering method you use you will need to install the Xpdf packages
available from http://www.foolabs.com/xpdf/.
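For example, with Xpdf's pdftotext program installed and the
C<FileFilter> method, a minimal configuration might look like the
following sketch (the trailing "-" asks pdftotext to write to stdout;
adjust the extension, paths and parser choice to your setup):

```
# run each .pdf through pdftotext and index its text output
FileFilter .pdf pdftotext "'%p' -"
IndexContents TXT* .pdf
```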
=head3 I'm using Windows and can't get Filters or the prog input method
to work!
Both the C<-S prog> input method and filters use the C<popen()> system
call to run the external program. If your external program is, for
example, a perl script, you have to tell Swish-e to run perl, instead of
the script. Swish-e will convert forward slashes to backslashes
when running under Windows.
For example, you would need to specify the path to perl as (assuming
this is where perl is on your system):
IndexDir e:/perl/bin/perl.exe
Or run a filter like:
FileFilter .foo e:/perl/bin/perl.exe 'myscript.pl "%p"'
It's often easier to just install Linux.
=head3 How do I index non-English words?
Swish-e indexes 8-bit characters only. This is the ISO 8859-1 Latin-1
character set, and includes many non-English letters (and symbols).
As long as they are listed in C<WordCharacters> they will be indexed.
Actually, you probably can index any 8-bit character set, as long as
you don't mix character sets in the same index and don't use libxml2 for
parsing (see below).
The C<TranslateCharacters> directive (L<SWISH-CONFIG|SWISH-CONFIG>)
can translate characters while indexing and searching. You may
specify the mapping of one character to another character with the
C<TranslateCharacters> directive.
C<TranslateCharacters :ascii7:> is a predefined set of characters that
will translate eight-bit characters to ascii7 characters. Using the
C<:ascii7:> rule will, for example, translate "äáç" to "aac". This means:
searching "çelik", "çélik" or "celik" will all match the same word.
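For example (a sketch; choose mappings to match your own character set):

```
# fold the listed characters to their ascii equivalents
TranslateCharacters äéñ aen

# or use the predefined folding rule instead:
# TranslateCharacters :ascii7:
```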
Note: When using libxml2 for parsing, parsed documents are converted
internally (within libxml2) to UTF-8. This is converted to ISO 8859-1
Latin-1 when indexing. In cases where a string cannot be converted
from UTF-8 to ISO 8859-1 (because it contains non-8859-1 characters),
the string will be sent to Swish-e in UTF-8 encoding. This results
in some words being indexed incorrectly. Setting C<ParserWarningLevel> to 1
or more will display warnings when UTF-8 to 8859-1 conversion fails.
=head3 Can I add/remove files from an index?
Try building swish-e with the C<--enable-incremental> option.
The rest of this FAQ applies to the default swish-e format.
Swish-e currently has no way to add or remove items from
its index. But, Swish-e indexes so quickly that it's often possible to
reindex the entire document set when a file needs to be added, modified or removed.
If you are spidering a remote site, consider caching compressed copies of the documents locally.
Incremental additions can be handled in a couple of ways, depending on
your situation. It's probably easiest to create one main index every
night (or every week), and then create an index of just the new files
between main indexing jobs and use the C<-f> option to pass both indexes
to Swish-e while searching.
You can merge the indexes into one index (instead of using -f), but it's
not clear that this has any advantage over searching multiple indexes.
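Sketched as commands (the index and config file names here are
hypothetical):

```
# nightly full index
swish-e -c main.conf -f main.index

# small index of only the new files
swish-e -c incremental.conf -f incr.index

# search both indexes at once
swish-e -w 'foo' -f main.index incr.index

# or, alternatively, merge them into one index
swish-e -M main.index incr.index merged.index
```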
How does one create the incremental index?
One method is by using the C<-N> switch to pass a file path to
Swish-e when indexing. It will only index files that have a last
modification date C<newer> than the file supplied with the C<-N> switch.
This option has the disadvantage that Swish-e must process every file
in every directory as if it were going to be indexed (the test for
C<-N> is done last, right before indexing of the file contents begins
and after all other tests on the file have been completed) -- all that
just to find a few new files.
Also, if you use the Swish-e index file as the file passed to C<-N> there
may be files that were added after indexing was started, but before the
index file was written. This could result in a file not being added to
the index.
Another option is to maintain a parallel directory tree that contains
symlinks pointing to the main files. When a new file is added (or
changed) to the main directory tree you create a symlink to the real file
in the parallel directory tree. Then just index the symlink directory
to generate the incremental index.
This option has the disadvantage that you need a central program
that creates the new files and can also create the symlinks.
But, indexing is quite fast since Swish-e only has to look at the files
that need to be indexed. When you run full indexing you simply unlink
(delete) all the symlinks.
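A sketch of the symlink approach, using hypothetical paths under /tmp
(the swish-e steps are shown as comments; note that C<FollowSymLinks yes>
must be set in the configuration for the symlinks to be followed):

```shell
# main document tree and the parallel "incremental" tree
mkdir -p /tmp/docs /tmp/incremental
echo '<html><title>new page</title></html>' > /tmp/docs/new.html

# when a document is added or changed, link it into the parallel tree
ln -sf /tmp/docs/new.html /tmp/incremental/new.html

# index only the symlink tree to build the incremental index:
#   swish-e -i /tmp/incremental -f incr.index   (with FollowSymLinks yes)

# after the next full indexing run, clear the symlinks:
#   rm /tmp/incremental/*.html
```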
Both of these methods have issues where files could end up in both
indexes, or files being left out of an index. Use of file locks while
indexing, and hash lookups during searches can help prevent these
problems.
=head3 I run out of memory trying to index my files.
It's true that indexing can take up a lot of memory! Swish-e is extremely
fast at indexing, but that comes at the cost of memory.
The best answer is to install more memory.
Another option is to use the C<-e> switch. This will require less memory,
but indexing will take longer as not all data will be stored in memory
while indexing. How much less memory and how much more time depends on
the documents you are indexing, and the hardware that you are using.
Here's an example of indexing all .html files in /usr/doc on Linux.
This first example is I<without> C<-e> and used about 84M of memory:
270279 unique words indexed.
23841 files indexed. 177640166 total bytes.
Elapsed time: 00:04:45 CPU time: 00:03:19
This is I<with> C<-e>, and used about 26M of memory:
270279 unique words indexed.
23841 files indexed. 177640166 total bytes.
Elapsed time: 00:06:43 CPU time: 00:04:12
You can also build a number of smaller indexes and then merge them
together with C<-M>. Using C<-e> while merging will save memory.
Finally, if you do build a number of smaller indexes, you can specify more
than one index when searching by using the C<-f> switch. Sorting large
results sets by a property will be slower when specifying multiple index
files while searching.
=head3 "too many open files" when indexing with -e option
Some platforms report "too many open files" when using the -e economy option.
The -e feature uses many temporary files (something like 377) plus the
index files, and this may exceed your system's limit on open files.
Depending on your platform you may need to set "ulimit" or "unlimit".
For example, under Linux bash shell:
$ ulimit -n 1024
Or under an old Sparc
% unlimit openfiles
=head3 My system admin says Swish-e uses too much of the CPU!
That's a good thing! That expensive CPU is supposed to be busy.
Indexing takes a lot of work -- to make indexing fast much of the work is
done in memory which reduces the amount of time Swish-e is waiting on I/O.
But there are two things you can try:
The C<-e> option will run Swish-e in economy mode, which uses the disk
to store data while indexing. This makes Swish-e run somewhat slower,
but also uses less memory. Since it is writing to disk more often it
will be spending more time waiting on I/O and less time in CPU. Maybe.
The other thing is to simply lower the priority of the job using the
nice(1) command:
/bin/nice -15 swish-e -c search.conf
If you are concerned about search time, make sure you are using the -b
and -m switches to return only a page of results at a time. If you know
that your result sets will be large, that you wish to return results
one page at a time, and that many pages of the same query will often be
requested, it may be smart to request all the documents on the first
request and then cache the results to a temporary file. The perl module
File::Cache makes this very simple to accomplish.
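File::Cache may not be installed everywhere, so here is a minimal
sketch of the same idea using the core Storable and Digest::MD5
modules instead; the query and result list are made up for
illustration:

```perl
use strict;
use warnings;
use Storable qw(store retrieve);
use Digest::MD5 qw(md5_hex);

# hypothetical query and full result list (in practice, every hit
# returned by swish-e on the first request)
my $query   = 'foo AND bar';
my @results = ( '/docs/a.html', '/docs/b.html', '/docs/c.html' );

# cache the complete result set under a key derived from the query
my $cache_file = '/tmp/swish_cache_' . md5_hex($query) . '.sto';
store( \@results, $cache_file );

# later page requests re-read the cache instead of re-searching
my $cached = retrieve($cache_file);
print "first hit: $cached->[0]\n";
```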
=head2 Spidering
=head3 How can I index documents on a web server?
If possible, use the file system method C<-S fs> of indexing to index
documents in your web area of the file system. This avoids the overhead
of spidering a web server and is much faster. (C<-S fs> is the default
method if C<-S> is not specified).
If this is impossible (the web server is not local, or documents are dynamically
generated), Swish-e provides two methods of spidering. First, it includes the http method
of indexing C<-S http>. A number of special configuration directives are available that
control spidering (see L<SWISH-CONFIG/"Directives for the HTTP Access Method Only">). A perl helper
script (swishspider) is included in the F<src> directory to assist with spidering web
servers. There are example configurations for spidering in the F<conf> directory.
As of Swish-e 2.2, there's a general purpose "prog" document source where
a program can feed documents to it for indexing. A number of example
programs can be found in the C<prog-bin> directory, including a program
to spider web servers. The provided spider.pl program is full-featured
and is easily customized.
The advantage of the "prog" document source feature over the "http" method
is that the program is only executed one time, whereas the swishspider
program used in the "http" method is executed once for every document
read from the web server. The forking of Swish-e and compiling of the
perl script can be quite expensive, time-wise.
The other advantage of the C<spider.pl> program is that it's simple and
efficient to add filtering (such as for PDF or MS Word docs) right into
the spider.pl's configuration, and it includes features such as MD5 checks
to prevent duplicate indexing, and options to skip spidering some files
or to index a file without following its links. And since it's a perl program there's no
limit on the features you can add.
=head3 Why does swish report "./swishspider: not found"?
Does the file F<swishspider> exist where the error message displays? If not, either
set the configuration option L<SpiderDirectory|SWISH-CONFIG/"item_SpiderDir">
to point to the directory where the F<swishspider> program is found, or place the
F<swishspider> program in the current directory when running swish-e.
If you are running Windows, make sure "perl" is in your path. Try typing F<perl> from
a command prompt.
If you are not running Windows, make sure that the shebang line (the first line of the
swishspider program that starts with #!) points to the correct location of perl.
Typically this will be F</usr/bin/perl> or F</usr/local/bin/perl>. Also, make sure that
you have execute and read permissions on F<swishspider>.
The F<swishspider> perl script is only used with the -S http method of indexing.
=head3 I'm using the spider.pl program to spider my web site, but some
large files are not indexed.
The C<spider.pl> program has a default limit of 5MB file size. This can
be changed with the C<max_size> parameter setting. See C<perldoc
spider.pl> for more information.
=head3 I still don't think all my web pages are being indexed.
The F<spider.pl> program has a number of debugging switches and can be
quite verbose in telling you what's happening, and why. See C<perldoc
spider.pl> for instructions.
=head3 Swish is not spidering Javascript links!
Swish cannot follow links generated by Javascript, as they are generated
by the browser and are not part of the document.
=head3 How do I spider other websites and combine it with my own
(filesystem) index?
You can either merge C<-M> two indexes into a single index, or use C<-f>
to specify more than one index while searching.
You will have better results with the C<-f> method.
=head2 Searching
=head3 How do I limit searches to just parts of the index?
If you can identify "parts" of your index by the path name you have
two options.
The first option is to index the document path. Add this to your
configuration:
MetaNames swishdocpath
Now you can search for words or phrases in the path name:
swish-e -w 'foo AND swishdocpath=(sales)'
So that will only find documents with the word "foo" and where the file's
path contains "sales". That might not work as well as you'd like, though,
as both of these paths will match:
/web/sales/products/index.html
/web/accounting/private/sales_we_messed_up.html
This can be solved by searching with a phrase (assuming "/" is not
a WordCharacter):
swish-e -w 'foo AND swishdocpath=("/web/sales/")'
swish-e -w 'foo AND swishdocpath=("web sales")' (same thing)
The second option is a bit more powerful. With the C<ExtractPath>
directive you can use a regular expression to extract out a sub-set of
the path and save it as a separate meta name:
MetaNames department
ExtractPath department regex !^/web/([^/]+).+$!$1/
This says: match a path that starts with "/web/", capture everything
after that up to, but not including, the next "/" into $1, and match
everything from that "/" onward. Then replace the entire matched
string with $1. The result is indexed as the meta name
"department".
Now you can search like:
swish-e -w 'foo AND department=sales'
and be sure that you will only match the documents in the /web/sales/*
path. Note that you can map completely different areas of your file
system to the same metaname:
# flag the marketing specific pages
ExtractPath department regex !^/web/(marketing|sales)/.+$!marketing/
ExtractPath department regex !^/internal/marketing/.+$!marketing/
# flag the technical departments pages
ExtractPath department regex !^/web/(tech|bugs)/.+$!tech/
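These regular expressions can be tried out in plain Perl before they go
into the configuration file. A small sketch of the first pattern above
(the paths and expected departments are hypothetical):

```perl
use strict;
use warnings;

# hypothetical paths and the department each should map to
my %expect = (
    '/web/sales/products/index.html' => 'sales',
    '/web/tech/api/notes.html'       => 'tech',
);

# apply the same substitution the ExtractPath directive uses
for my $path ( sort keys %expect ) {
    ( my $dept = $path ) =~ s!^/web/([^/]+).+$!$1!;
    print "$path => $dept\n";
}
```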
Finally, if you have something more complicated, use C<-S prog> and
write a perl program or use a filter to set a meta tag when processing
each file.
=head3 How is ranking calculated?
The C<swishrank> property value is calculated based on which Ranking Scheme (or algorithm)
you have selected. In this discussion, any time the word B<fancy> is used, you should
consult the actual code for more details. It is open source, after all.
Things you can do to affect ranking:
=over
=item MetaRankBias
You may configure your index to bias certain metaname values more or less than others.
See the C<MetaRankBias> configuration option in L<SWISH-CONFIG>.
=item IgnoreTotalWordCountWhenRanking
Set to 1 (default) or 0 in your config file. See L<SWISH-CONFIG>.
B<NOTE:> You must set this to 0 to use the IDF Ranking Scheme.
=item structure
Each term's position in each HTML document is given a structure value based on the context
in which the word appears. The structure value is used to artificially inflate
the frequency of each term in that particular document.
These structural values are defined in F<config.h>:
#define RANK_TITLE 7
#define RANK_HEADER 5
#define RANK_META 3
#define RANK_COMMENTS 1
#define RANK_EMPHASIZED 0
For example, if the word C<foo> appears in the title of a document, the Scheme
will treat that document as if C<foo> appeared 7 additional times.
=back
All Schemes share the following characteristics:
=over
=item AND searches
The rank value is averaged for all AND'd terms. Terms within a set of parentheses () are
averaged as a single term (this is an acknowledged weakness and is on the TODO list).
=item OR searches
The rank value is summed and then doubled for each pair of OR'd terms. This results
in higher ranks for documents that have multiple OR'd terms.
=item scaled rank
After a document's raw rank score is calculated, a final rank score is calculated using a
fancy C<log()> function. All the documents are then scaled against a base score of 1000.
The top-ranked document will therefore always have a C<swishrank> value of 1000.
=back
Here is a brief overview of how the different Schemes work. The number in parentheses after
the name is the value to invoke that scheme with C<swish-e -R> or C<RankScheme()>.
=over
=item Default (0)
The default ranking scheme considers the number of times a term appears in a
document (frequency), the MetaRankBias and the structure value. The rank might be summarized
as:
DocRank = Sum of ( structure + metabias )
Consider this output with the DEBUG_RANK variable set at compile time:
Ranking Scheme: 0
Word entry 0 at position 6 has struct 7
Word entry 1 at position 64 has struct 41
Word entry 2 at position 71 has struct 9
Word entry 3 at position 132 has struct 9
Word entry 4 at position 154 has struct 9
Word entry 5 at position 423 has struct 73
Word entry 6 at position 541 has struct 73
Word entry 7 at position 662 has struct 73
File num: 1104. Raw Rank: 21. Frequency: 8 scaled rank: 30445
Structure tally:
struct 0x7 = count of 1 ( HEAD TITLE FILE ) x rank map of 8 = 8
struct 0x9 = count of 3 ( BODY FILE ) x rank map of 1 = 3
struct 0x29 = count of 1 ( HEADING BODY FILE ) x rank map of 6 = 6
struct 0x49 = count of 3 ( EM BODY FILE ) x rank map of 1 = 3
Every word instance starts with a base score of 1.
Then for each instance of your word, a running
sum is taken of the structural value of that word position plus any bias you've configured.
In the example above, the raw rank is C<1 + 8 + 3 + 6 + 3 = 21>.
Consider this line:
struct 0x7 = count of 1 ( HEAD TITLE FILE ) x rank map of 8 = 8
That means there was one instance of our word in the title of the file.
Its context was in the <head> tagset, inside the <title>.
The <title> is the most specific structure, so it gets the
RANK_TITLE score: 7. The base rank of 1 plus the structure score of 7 equals 8. If there
had been two instances of this word in the title, then the score would have been C<8 + 8 = 16>.
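The tally arithmetic above can be sketched in a few lines (an
illustration of the sum only, not Swish-e's actual C code):

```perl
use strict;
use warnings;

# (count, mapped rank) pairs from the structure tally above
my @tally = ( [ 1, 8 ], [ 3, 1 ], [ 1, 6 ], [ 3, 1 ] );

# base score of 1, plus count x mapped rank for each structure context
my $raw_rank = 1;
$raw_rank += $_->[0] * $_->[1] for @tally;

print "raw rank: $raw_rank\n";   # 1 + 8 + 3 + 6 + 3 = 21
```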
=item IDF (1)
IDF is short for Inverse Document Frequency. That's fancy ranking lingo for taking into
account the total frequency of a term across the entire index, in addition to the term's
frequency in a single document. IDF ranking also uses the relative density of a word in a
document to judge its relevancy. Words that appear more often in a doc make that doc's rank
higher, and longer docs are not weighted higher than shorter docs.
The IDF Scheme might be summarized as:
DocRank = Sum of ( density * idf * ( structure + metabias ) )
Consider this output from DEBUG_RANK:
Ranking Scheme: 1
File num: 1104 Word Score: 1 Frequency: 8 Total files: 1451
Total word freq: 108 IDF: 2564
Total words: 1145877 Indexed words in this doc: 562
Average words: 789 Density: 1120 Word Weight: 28716
Word entry 0 at position 6 has struct 7
Word entry 1 at position 64 has struct 41
Word entry 2 at position 71 has struct 9
Word entry 3 at position 132 has struct 9
Word entry 4 at position 154 has struct 9
Word entry 5 at position 423 has struct 73
Word entry 6 at position 541 has struct 73
Word entry 7 at position 662 has struct 73
Rank after IDF weighting: 574321
scaled rank: 132609
Structure tally:
struct 0x7 = count of 1 ( HEAD TITLE FILE ) x rank map of 8 = 8
struct 0x9 = count of 3 ( BODY FILE ) x rank map of 1 = 3
struct 0x29 = count of 1 ( HEADING BODY FILE ) x rank map of 6 = 6
struct 0x49 = count of 3 ( EM BODY FILE ) x rank map of 1 = 3
It is similar to the default Scheme, but notice how the total number of files in the index
and the total word frequency (as opposed to the document frequency) are both part of the
equation.
=back
Ranking is a complicated subject. SWISH-E allows for more Ranking Schemes to be developed
and experimented with, using the -R option (from the swish-e command) and the RankScheme
(see the API documentation). Experiment and share your findings via the discussion list.
=head3 How can I limit searches to the title, body, or comment?
Use the C<-t> switch.
=head3 I can't limit searches to title/body/comment.
Or, I<I can't search with meta names, all the names are indexed as
"plain".>
Check in the config.h file if #define INDEXTAGS is set to 1. If it is,
change it to 0, recompile, and index again. When INDEXTAGS is 1, ALL
the tags are indexed as plain text; that is, you index "title", "h1", and
so on, AND they lose their indexing meaning. If INDEXTAGS is set to 0,
you will still index meta tags and comments, unless you have indicated
otherwise in the user config file with the IndexComments directive.
Also, check for the C<UndefinedMetaTags> setting in your configuration
file.
=head3 I've tried running the included CGI script and I get a "Internal
Server Error"
Debugging CGI scripts is beyond the scope of this document.
Internal Server Error basically means "check the web server's log for
an error message", as it can mean a bad shebang (#!) line, a missing
perl module, FTP transfer error, or simply an error in the program.
The CGI script F<swish.cgi> in the F<example> directory contains some
debugging suggestions. Type C<perldoc swish.cgi> for information.
There are also many, many CGI FAQs available on the Internet. A quick web
search should offer help. As a last resort you might ask your webadmin
for help...
=head3 When I try to view the swish.cgi page I see the contents of the
Perl program.
Your web server is not configured to run the program as a CGI script.
This problem is described in C<perldoc swish.cgi>.
=head3 How do I make Swish-e highlight words in search results?
Short answer:
Use the supplied swish.cgi or search.cgi scripts located in the F<example> directory.
Long answer:
Swish-e can't because it doesn't have access to the source documents when
returning results, of course. But a front-end program of your creation
can highlight terms. Your program can open up the source documents and
then use regular expressions to replace search terms with highlighted
or bolded words.
But that will fail with all but the simplest source documents.
For HTML documents, for example, you must parse the document into words
and tags (and comments). A word you wish to highlight may span multiple
HTML tags, or be a word in a URL and you wish to highlight the entire
link text.
Perl modules such as HTML::Parser and XML::Parser make word extraction
possible. Next, you need to consider that Swish-e uses settings such
as WordCharacters, BeginCharacters, EndCharacters, IgnoreFirstChar,
and IgnoreLastChar to define a "word". That is, you can't assume
that a string of characters with white space on each side is a word.
Then things like TranslateCharacters, and HTML Entities may transform a
source word into something else, as far as Swish-e is concerned. Finally,
searches can be limited by metanames, so you may need to limit your
highlighting to only parts of the source document. Throw phrase searches
and stopwords into the equation and you can see that it's not a trivial
problem to solve.
All hope is not lost, though, as Swish-e does provide some help.
Using the C<-H> option it will return in the headers the current index
(or indexes) settings for WordCharacters (and others) required to parse
your source documents as it parses them during indexing, and will return a
"Parsed Words:" header that will show how it parsed the query internally.
If you use fuzzy indexing (word stemming, soundex, or metaphone)
then you will also need to stem each word in your
document before comparing with the "Parsed Words:" returned by Swish-e.
The Swish-e stemming code is available either by using the Swish-e
Perl module (SWISH::API) or the C library (included with the swish-e distribution),
or by using the SWISH::Stemmer module available on CPAN. Also on CPAN is
the module Text::DoubleMetaphone. Using SWISH::API probably provides the best
stemming support.
=head3 Do filters affect performance during searching?
No. Filters (FileFilter or via "prog" method) are only used for building
the search index database. During search requests there will be no
filter calls.
=head2 I have read the FAQ but I still have questions about using Swish-e.
The Swish-e discussion list is the place to go. http://swish-e.org/.
Please do not email developers directly. The list is the best place to
ask questions.
Before you post please read I<QUESTIONS AND TROUBLESHOOTING> located
in the L<INSTALL|INSTALL> page. You should also search the Swish-e
discussion list archive which can be found on the swish-e web site.
In short, be sure to include the following when asking for help.
=over 4
=item * The swish-e version (./swish-e -V)
=item * What you are indexing (and perhaps a sample), and the number
of files
=item * Your Swish-e configuration file
=item * Any error messages that Swish-e is reporting
=back
=head1 Document Info
$Id: SWISH-FAQ.pod,v 1.36 2004/10/04 22:49:35 whmoseley Exp $