File: README


------------------------------------------------------------------------------

THIS PROGRAM IS PROVIDED "AS IS". THE AUTHOR PROVIDES NO WARRANTIES
WHATSOEVER, EXPRESSED OR IMPLIED, INCLUDING WARRANTIES OF
MERCHANTABILITY, TITLE, OR FITNESS FOR ANY PARTICULAR PURPOSE.  THE
AUTHOR DOES NOT WARRANT THAT USE OF THIS PROGRAM DOES NOT INFRINGE THE
INTELLECTUAL PROPERTY RIGHTS OF ANY THIRD PARTY IN ANY COUNTRY.

Comments and/or bug reports should be addressed to:

    john@johncon.com (John Conover)

------------------------------------------------------------------------------

The objective of this application of the rel program, in conjunction
with the procmail/smartlist programs, is to construct an enterprise
wide, full text information retrieval system that uses the Unix MTA
(Message Transfer Agent,) as a delivery, query, and distribution
system-in a sense, what is currently termed "groupware."

In general, the operational procedure is as follows:

    1) The information retrieval database is installed in the
       database administrator's, (perhaps a project team leader, since
       installation does not require sysadm privileges,) account, as
       per the instructions in the INSTALL file in this
       directory. Multiple information retrieval systems may be
       installed, if necessary.

    2) All members, perhaps a project team, who will have access to
       the information system will be entered into the accept and/or
       reject files, as appropriate, in the smartlist directory for
       the system's database as per the standard installation in the
       procmail/smartlist's manual. If the repository is, in addition,
       to function as a distribution agent of information, the dist
       file in the smartlist directory should include the email
       addresses of those on the distribution list, as per the
       standard installation outlined in the manual.

    3) As formal email correspondence occurs between the members of
       the project team, any mail that is mailed to the information
       system's account will be saved, and placed under maintenance of
       the full text database system. Anything pertaining to the
       project should be submitted to the database system. If the
       system functions as a distribution agent, all email submitted
       will also be mailed to the distribution list, which could be
       other team members, management, etc. These in turn, can be
       replied to, and the replies will be placed under the
       maintenance of the database system, and distributed to all
       others on the distribution list-ie., it is an asynchronous
       conferencing system, or an electronic "mailing list."

    4) However, it differs from conventional electronic mailing
       lists, in that it has a "sister account," which can receive
       mail, and can be used to query the database for previous email
       concerning issues that have been addressed, (or, perhaps not
       addressed.)  The program rel is used for the query, so a
       "context can be framed," using the rel search criteria.  Any
       email found that concerns the "framed context," is returned to
       the user via email, as a MIME compliant digest. The order of
       the emails in the digest are in order of relevance to the
       "context" of the query. In some sense, the digest is like an
       electronic administrative folder that is maintained with
       correspondences concerning specific issues. The major
       difference is that the folder can be created dynamically, on
       demand.
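The digest delivery described in 4) can be illustrated with Python's
standard email package. This is a sketch of the digest format itself
(RFC 2046 multipart/digest), not the actual code used by rel or
smartlist; the function name and its inputs are hypothetical.

```python
from email.message import EmailMessage
from email.mime.message import MIMEMessage
from email.mime.multipart import MIMEMultipart

def make_digest(subject, hits):
    """Bundle hit messages, already ordered most-relevant-first, into
    one multipart/digest; each part is a message/rfc822, so a capable
    MUA can "burst" the digest back into individual messages."""
    digest = MIMEMultipart("digest")
    digest["Subject"] = subject
    for hit_subject, body in hits:
        # Build each hit as a complete inner message.
        inner = EmailMessage()
        inner["Subject"] = hit_subject
        inner.set_content(body)
        digest.attach(MIMEMessage(inner))
    return digest
```

Mailing the result back to the querying user is then just a matter of
handing the serialized digest to the MTA.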

    5) It turns out that, with a modern MUI (Mail User Interface,)
       like the GNU emacs' rmail facility, very powerful
       informatic operations can be performed. For example, the email
       digest returned from the operation in 4) above is in a MIME
       digest, in order of relevance to a "context" query. The digest
       can be "burst" into individual messages, and, starting with the
       most relevant message, the messages can be browsed to find a
       specific entry, or the context of a group of messages. If a
       specific message is of interest, it can be replied to, to
       re-table an issue, or request additional information,
       etc.

       There is one other very powerful operation that can be
       performed. Suppose that when examining the messages in order of
       relevance, some interest is generated in one of the messages,
       and it is desired to find out more about the "context" of the
       message. The messages can be rearranged, and ordered by the
       author to further find out the "context" of the messages
       from the author's point of view. The messages can also be
       rearranged by date, so that a short "context" window can be
       derived around the temporal issues of the message. These
       operations are termed "moving orthogonally in information
       space," eg., you were moving through the documents in order of
       relevance, then investigated the concerns of a specific author,
       then investigated the author's concerns in relation to the rest
       of the group over a period of time, etc.

Note that the process outlined in sections 3), 4), and 5), above,
really constitutes nothing more than an electronic literature search,
and is a similar process to what a historian would do when researching
a subject, (presumably because an understanding of the "context" of
the subject was desired.) The only difference is that the process is
highly automated. The proposed system can be thought of as an
electronic filing cabinet that can be searched, electronically. The
concept is not new-it was proposed by Vannevar Bush in the 1940's, (the
Memex Machine,) and later modified by Douglas Engelbart in the early
1970's.

Note that what is being proposed is a new administrative paradigm-one
that addresses context as opposed to traditional content issues in
organizations, and is compatible with the contemporary concepts of
"Empowerment," and "Total Quality Management." Administration is the
mechanization of the flow of information through an organization. What is
being proposed is to use an "information machine," ie., computer, to
search, collate, and distribute information. In some sense, it is a
"memo machine" that can transcend organizational and parochial
boundaries. Note that the memos do not have to be structured-how
information is structured will be specified at query time, not the
time of composition of the memo. (Note that it is not a "hypertext"
system, since the "links" are constructed, dynamically, at the time of
the query-not at the time a document is composed.)

A key issue in automating the process is the capability for the
uninitiated to create "context" queries that are representative of the
information desired-the rel syntax is powerful, intuitive, and easy to
use on an operational basis; most managers already understand how to
use email, and the rel query syntax is similar to the one used in
algebraic calculators-in point of fact, it is identical to the syntax
used in calculators from Texas Instruments and Casio, except that
numbers are replaced by words, and mathematical operators are replaced
by boolean operators-a short hand natural language query.
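A minimal sketch, in Python, of how such a calculator-style query can
be evaluated (an assumption-laden simplification: rel itself is written
in C and ranks whole files, while this sketch only tests word presence
in a single document, applies the binary operators left to right with
equal precedence, and omits rel's partial-key matching; queries should
therefore be fully parenthesized, as the examples in this document are):

```python
import re

def tokenize(query):
    """Split a query like "((a & b) | c)" into parens, operators, words."""
    return re.findall(r"[()&|!]|[^\s()&|!]+", query)

def evaluate(tokens, words):
    """Evaluate a tokenized infix boolean query against a set of words.
    Binary operators are applied left to right with equal precedence;
    "!" is treated as and-not, matching the rel examples."""
    def atom():
        tok = tokens.pop(0)
        if tok == "(":
            val = expr()
            tokens.pop(0)          # consume the closing ")"
            return val
        return tok in words        # a plain word: present or not

    def expr():
        val = atom()
        while tokens and tokens[0] in "&|!":
            op = tokens.pop(0)
            rhs = atom()
            if op == "&":
                val = val and rhs
            elif op == "|":
                val = val or rhs
            else:                  # "!" -- keep val only if rhs absent
                val = val and not rhs
        return val

    return expr()

def matches(query, text):
    """True if the document text satisfies the boolean query."""
    return evaluate(tokenize(query), set(text.lower().split()))
```

For example, `matches("(((these & those) | (them & us)) ! we)", text)`
is true for a text containing "these" and "those" but not "we".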

As a concluding remark, note that the system can not be a substitute
for good organizational practices and disciplines-as Doug Engelbart
stated "if you automate a big mess, you end up with a very fast big
mess."

There are 3 attachments. The first is a brief introduction of how the
system was used in an experimental program, the second is excerpts
from the original system development reports, (the system outlined,
above, is far simpler than the original,) and the third is an excerpt
from the rel program manual.
______________________________________________________________________________

Attachment 1:
______________________________________________________________________________

Attached is a brief synopsis of an asynchronous conferencing system
(also known as an information retrieval system, electronic literature
search system, or corporate repository,) that I used in cross
functional program management, in another life, a long time ago.  The
objective was to find a methodology to relate the corporate
information repository to the management structure, (we did not
consider the technical issues to be significant.) The general concept
was to add sufficient functionality to the Unix email system to turn
it into an electronic literature search system.

The attached is a "cut and stick" from some of the reports on the
system's development. The project/program team supported by this
system consisted of little over a hundred professionals, from
approximately 20 specialties, and 4 core corporate functions. They
were geographically, and ethnically, dispersed.

Also attached is the description of a program, rel.c, which was used to
perform complex literature search queries on the full text database,
and return documents, in order of relevance to the query, (note that
hypertext methodologies are incapable of operating in this fashion.)
These documents were returned as an email digest, which could be
"burst," into constituent documents, allowing the most relevant
document to be reviewed first, (and then, if necessary, the specific
email could be responded to, for further clarification, etc.) Since
most email readers, elm, pine, email, etc., are capable of sorting a
mail box by different criteria, one could move "orthogonally in
information space," during the review process, (ie., move through the
documents by relevance, and then sort by author, to find out what
he/she had to say about things, then sort by date, to find out the
chronology of events in the discussion, etc.) The database system was
a distributed environment, with each segment of the database
consisting of less than 10Mbyte, so queries were done in parallel,
using all network resources, and, thus, very fast.

In point of fact, all of the attachments are a "cut and stick" from
documents fetched from the full text database system with command "rel
((information & retrieval) | (literature & search) | (corporate &
repository) & management)"
______________________________________________________________________________

Attachment 2:
______________________________________________________________________________

          Various "cut and sticks" from the development reports.

Information systems are used in program management, which must
coordinate the various activities of the corporate functions (ie.,
engineering, marketing, sales, etc.) involved in development
projects. After researching the issues, (see below,) we concluded that
a distributed full text system that uses the mail (MTA) system as a
communication medium is the desirable direction to pursue. Our
reasoning is as follows:

        1) The Unix MTA is almost universal, and will operate
        effectively over uucp and/or ethernet connectivities in a
        non-homogeneous hardware environment.

        2) Each transaction is logged, with a date/time stamp, and who
        created the transaction.

        3) The MTA already has remedial file storage capabilities,
        which can be used to query/respond to transactions at a later
        date.

        4) Most(?) computers are already connected together, and users
        are familiar with how to use the system.

        5) The MTA database can be NFS'ed to conserve machine
        resources.

        6) It is a text based system.

We discounted the "hypertext" type of systems, because the links must be
established before the document is stored-which is fine if you know what
you are going to query for. In a general management application, this is
seldom the case. We set up a prototype system, using the following
(readily available) programs:

        1) elm, because it has a slightly more sophisticated file
        storage structure, and a very powerful aliasing capability
        that can alias team members as a group. Additionally, it has
        limited query capabilities, and can, through its forms
        capabilities, send mail transactions in a structured format.
        (Which is advantageous if the transactions are used for
        notification of schedule milestone completion, etc.) Eudora
        was used on the PC's and MAC's, using POP3 as the
        communications environment between the PC's and the Unix MTA.

        2) The dbm library to build an extensible hash query system
        into the file storage structure made by elm.  This was
        operated in two ways, by an RPC direct call, and a mail daemon
        that "read" incoming mail (to a query "account") and returned
        (via mail) all transactions that satisfied boolean
        conditionals on requested words. (A data dictionary was added
        later, so that the dictionary could be scanned for matches to
        regular expressions, which were then passed to the extensible
        hash system, but for some reason, this was seldom used.) The
        query was made through a very simple natural language
        interface, ie.,

                send john and c.*r not January

        would return all transactions containing john and a word
        matching c.*r, excepting those written in January. (We did not
        attempt phrases; it looked complicated, and is ill advised by
        Tenopir, etc., below.)
        This program contained approximately 350 lines of C code.  A
        soundex algorithm was added later to overcome spelling
        errors-the full text database contained the soundex of the
        words in a document, and any words searched for were converted
        to soundex prior to the query. (See the works by Knuth for
        details of the soundex algorithm.) Also a parser was added so
        that the boolean search words could be grouped in infix
        expressions, eg., ((john & conover) ! (January | march)). The
        documents were returned in order of relevance.
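The soundex fold described above can be sketched as follows. This is
the textbook algorithm (see Knuth, as the report suggests); whether the
prototype's implementation matched it in every detail is not recorded.

```python
def soundex(word):
    """Classic 4-character soundex code: keep the first letter, map the
    remaining consonants to digits, drop vowels and h/w/y, collapse
    adjacent duplicate digits, pad/truncate to 4 characters."""
    codes = {}
    for group, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                         ("l", "4"), ("mn", "5"), ("r", "6")):
        for ch in group:
            codes[ch] = digit
    word = word.lower()
    result = [word[0].upper()]
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        digit = codes.get(ch, "")
        if digit and digit != prev:
            result.append(digit)
        if ch not in "hw":         # h and w do not break a duplicate run
            prev = digit
    return ("".join(result) + "000")[:4]
```

With this fold, a misspelling like "connover" hashes to the same code
as "conover", so a query still finds the document despite the error.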

This prototype was well received, and was used as follows:

        1) Management "decreed" that the system would be used as a
        management tool, and all data had to be entered, or
        transcribed into the system (including the minutes of
        meetings, etc.) If it didn't exist in the system, it did not
        exist. All discussions, and reasons for decisions had to be
        placed in the system. ALL team members and upper management
        had identical access to ALL transactions. (Mail could be used
        for private correspondence, such as politicking, etc. but all
        decisions, and the reasons for the decisions had to be placed
        in the system.) The guiding rule was that at the end of the
        project, the system contained a complete play by play
        chronology and history of all decisions, and reasoning
        concerning the project, and, by the way, who was responsible
        for the decisions. On each Monday, everyone entered into the
        system, his/her objectives for the week, and when each
        objective was finished, she/he mailed the milestone into the
        system-ie., all group members and management could thus find
        out the exact status of the project at any time (ie., a
        "social contract" was made with management and the rest of the
        members of the team.) In some sense, it is really nothing more
        than an automated, real-time MBO system. At any time, a
        discussion could be initiated on problems/decisions in the
        system by anyone. The project manager was assigned the
        responsibility of "moderator," or chair person for his/her
        section of the project. Each Friday, the system was queried
        for project status, and the status plumbed to TeX for
        formatting, and printed for official documentation. This
        document was discussed at a late Friday people-to-people staff
        meeting.  (The reason for setting things up this way can be
        found in Davidow, below.)

        2) Marketing was responsible for acquiring all market data on
        magnetic media, (from services like Data Quest, the Department
        of Commerce, etc.) and each document was "mailed" into the
        system so that the information was available for retrieval by
        anyone.  All had access to the progress made by engineering,
        and could contribute information on issues as the program
        developed-ie., this was a "concurrent engineering" environment.

        3) Engineering was responsible for maintaining schedules, and
        reflecting those schedules in the system-if slippages occurred
        the situation could be addressed immediately by management,
        and a suitable cross functional resolution could be arrived
        at.

        4) Sales was responsible for adding customer inputs,
        concerning the project, into the system, so customer
        definitions could be retrieved by all project members. This
        included the customer data, such as who has buying authority
        in the customer's organization, who has signature, etc.

The results were very impressive not only by productivity standards, but
also by "correctness to fit and form" standards (ie., the right product
was in the market at the right time, the first time.) This has become a
central agenda, as outlined in Davidow, below.

Bibliography:

"Computer-Supported Cooperative Work," Irene Greif
"A model for Distributed Campus Computing," George A. Champine
"Enterprise Networking," Ray Grenier and George Metes
"Connections," Lee Sproull and Sara Kiesler
"5th Generation Management," Charles M. Savage
"Intellectual Teamwork," Jolene Galegher, Robert E. Kraut and Carmen Egido
"In the Age of the Smart Machine," Shoshana Zuboff
"The Virtual Corporation," William H. Davidow and Michael S. Malone
"Accelerating Innovation," Marvin L. Patterson
"Paradigm Shift," Don Tapscott and Art Caston
"Developing Products in Half the Time," Preston G. Smith and Donald G. Reinertsen
"Full Text Databases," Carol Tenopir and Jung Soon Ro
"Text and Context," Susan Jones
"From Memex to Hypertext," James M. Nyce and Paul Kahn
"The Corporation of the 1990's," Michael S. Scott Morton
"Computer Augmented Teamwork," Robert P. Bostrom, Richard T. Watson, Susan T. Kinney
"Engineering Information Management Systems," John Stark
"CE Concurrent Engineering," Donald E. Carter and Barbara Stilwell Baker
"Information Retrieval," William B. Frakes and Ricardo Baeza-Yates
"Text Information Retrieval Systems," Charles T. Meadow
"Leading Self-Directed Work Teams," Kimball Fisher

______________________________________________________________________________

Attachment 3:
______________________________________________________________________________

           From the inline documentation to the program rel.c.

Rel is a program that determines the relevance of text documents to a
set of keywords expressed in boolean infix notation. The list of file
names that are relevant are printed to the standard output, in order
of relevance.

For example, the command:

    rel "(directory & listing)" /usr/share/man/cat1

(ie., find the relevance of all files that contain both of the words
"directory" and "listing" in the catman directory) will list 21 files,
out of the 782 catman files, of which "ls.1" is the fifth most
relevant-meaning that to find the command that lists directories in a
Unix system, the "literature search" was cut from 359 to 5 files, or a
reduction of approximately 98%. The command took 1 minute and 26
seconds to execute on a System V, rel. 4.2 machine, (20Mhz 386
with an 18ms. ESDI drive,) which is a considerable expediency in
relation to browsing through the files in the directory since ls.1 is
the 359'th file in the directory. Although this example is remedial, a
similar expediency can be demonstrated in searching for documents in
email repositories and text archives.

General description of the program:

This program is an experiment to evaluate using infix boolean
operations as a heuristic to determine the relevance of text files in
electronic literature searches. The operators supported are, "&" for
logical "and," "|" for logical "or," and "!" for logical "not."
Parentheses are used as grouping operators, and "partial key" searches
are fully supported, (meaning that the words can be abbreviated.) For
example, the command:

    rel "(((these & those) | (them & us)) ! we)" file1 file2 ...

would print a list of filenames that contain either the words "these"
and "those", or "them" and "us", but doesn't contain the word "we"
from the list of filenames, file1, file2, ... The order of the printed
file names is in order of relevance, where relevance is determined by
the number of incidences of the words "these", "those", "them", and
"us", in each file. The general concept is to "narrow down" the number
of files to be browsed when doing electronic literature searches for
specific words and phrases in a group of files using a command similar
to:

    more `rel "(((these & those) | (them & us)) ! we)" file1 file2`
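The relevance ordering described here (rank files by the number of
incidences of the query's words) can be sketched as below; the scoring
is an assumption read off this description, not rel's exact formula,
and the function names are hypothetical.

```python
import re

def relevance(path, words):
    """Score a file by the total incidences of the given words,
    the relevance measure described in the text."""
    with open(path, encoding="utf-8", errors="replace") as f:
        tokens = re.findall(r"\w+", f.read().lower())
    return sum(tokens.count(w) for w in words)

def rank(paths, words):
    """Return the paths ordered most-relevant-first, dropping files
    that contain none of the words."""
    scored = sorted(((relevance(p, words), p) for p in paths), reverse=True)
    return [p for score, p in scored if score > 0]
```

Browsing most-relevant-first is then a matter of feeding the ranked
list to a pager, in the spirit of the `more` command above.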

Although regular expressions were supported in the prototype versions
of the program, the capability was removed in the release versions for
reasons of syntactical formality, for example, the command:

    rel "((john & conover) & (joh.*over))" files

has a logical contradiction since the first group specifies all files
which contain "john" anyplace and "conover" anyplace in files, and the
second grouping specifies all files that contain "john" followed by
"conover". If the last group of operators takes precedence, the first
is redundant. Additionally, it is not clear whether wild card
expressions should span the scope of multiple records in a literature
search, (which the first group of operators in this example does,) or
exactly what a wild card expression that spans multiple records means,
ie., how many records are to be spanned, without writing a string of
EOL's in the infix expression. Since the two groups of operators in
this example are very close, operationally, (at least for practical
purposes,) it was decided that support of regular expressions should
be abandoned, and such operations left to the grep(1) suite.

Applicability:

Applicability of rel varies on complexity of search, size of database,
speed of host environment, etc., however, as some general guidelines:

    1) For text files with a total size of less than 5 MB, rel, and
    standard egrep(1) queries of the text files will probably prove
    adequate.

    2) For text files with a total size of 5 MB to 50 MB, qt seems
    adequate for most queries. The significant issue is that, although
    the retrieval execution times are probably adequate with qt, the
    database write times are not impressive. Qt is listed in "Related
    information retrieval software:," below.

    3) For text files with a total size that is larger than 50 MB, or
    where concurrency is an issue, it would be appropriate to consider
    one of the other alternatives listed in "Related information
    retrieval software:," below.

References:

    1) "Information Retrieval, Data Structures & Algorithms," William
    B. Frakes, Ricardo Baeza-Yates, Editors, Prentice Hall, Englewood
    Cliffs, New Jersey 07632, 1992, ISBN 0-13-463837-9.

    The sources for many of the algorithms presented in 1) are
    available by ftp, ftp.vt.edu:/pub/reuse/ircode.tar.Z

    2) "Text Information Retrieval Systems," Charles T. Meadow,
    Academic Press, Inc, San Diego, 1992, ISBN 0-12-487410-X.

    3) "Full Text Databases," Carol Tenopir, Jung Soon Ro, Greenwood
    Press, New York, 1990, ISBN 0-313-26303-5.

    4) "Text and Context, Document Processing and Storage," Susan
    Jones, Springer-Verlag, New York, 1991, ISBN 0-387-19604-8.

    5) ftp think.com:/wais/wais-corporate-paper.text

    6) ftp cs.toronto.edu:/pub/lq-text.README.1.10

Related information retrieval software:

    1) Wais, available by ftp, think.com:/wais/wais-8-b5.1.tar.Z.

    2) Lq-text, available by ftp,
    cs.toronto.edu:/pub/lq-text1.10.tar.Z.

    3) Qt, available by ftp,
    ftp.uu.net:/usenet/comp.sources/unix/volume27.

______________________________________________________________________________