File: OBJECTIVES

Introduction
---------------------------
This file was written before even the first implementation of 'ts',
and it's kept in the repository for historical reasons.

Tool objectives
---------------------------
- Be able to send commands to the computer, so that they run one after another,
  stored in a queue.
- Control what to do with the input/output of those programs.

Limits
---------------------------
- One queue system per user per system

Interface
---------------------------
The user should see a single command: ts
The default action (maybe on "ts - ...") should be to append a job to the
default queue.

Job execution
---------------------------
The jobs should receive the proper environment from the shell/parent they were
queued from.
The errorlevels, and maybe some statistical information (times, bytes in/out),
should be stored in the queue. The user should be able to check the result of
the jobs at any time.
We should fork a '$SHELL' and run the job command in it, because a user may
expect his shell's parsing to apply, as sketched below.
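
A minimal sketch of that idea, assuming POSIX fork()/waitpid() and a run_job()
helper invented for this example: the child runs the command line under
$SHELL, and the parent keeps the resulting errorlevel for the queue.

  #include <stdlib.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Run one queued command under the user's shell; return its errorlevel. */
  int run_job(const char *command)
  {
      const char *shell = getenv("SHELL");
      if (shell == NULL || shell[0] == '\0')
          shell = "/bin/sh";

      pid_t pid = fork();
      if (pid == -1)
          return -1;
      if (pid == 0) {
          /* Child: let the shell parse and run the queued command line. */
          execl(shell, shell, "-c", command, (char *) NULL);
          _exit(127);
      }

      /* Parent: wait for the job and keep its exit status for the queue. */
      int status;
      if (waitpid(pid, &status, 0) == -1)
          return -1;
      return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
  }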

Queues
---------------------------
There should be different queues, each with an id. The command will work by
default with one queue.
Each queue will have jobs in different states:
- running
- queued
- blocked (wrong errorlevel on the previous job? optional)
- finished (the user can see the errorlevel, etc.)
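
One possible C representation of those states; the names and the struct layout
are illustrative only, not the actual data structures of 'ts':

  /* State of a single job inside its queue. */
  enum job_state {
      JOB_RUNNING,   /* currently executing */
      JOB_QUEUED,    /* waiting for its turn */
      JOB_BLOCKED,   /* optional: previous job ended with a wrong errorlevel */
      JOB_FINISHED   /* done; errorlevel and statistics kept for the user */
  };

  struct job {
      int id;              /* job id inside its queue */
      enum job_state state;
      char *command;       /* command line as it was queued */
      int errorlevel;      /* meaningful once state == JOB_FINISHED */
  };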

Input/output
---------------------------
The user, when adding a job to the queue, should be able to choose:
- Where the output goes. As flags:
  - store: put it into a file in /tmp (or similar).
  - mail: put it into a file in /tmp, and send it by mail (maybe gzipped)
  - gzip: directly gzip the output
  - tail: store in a buffer the last 10 lines
  A user may choose "mail || gzip", and the file should be mailed gzipped. Or
  "gzip || tail", and the file should be stored directly gzipped, although with
  tailing available.
- What to do with the input: opened/closed
  If opened, the user should be able to connect his current stdin to the
  process's stdin.
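
A sketch of how those choices could be stored per job as combinable flags
(so "mail" plus "gzip" means the mailed file is gzipped); the flag and field
names are invented for this example:

  /* Output handling flags, OR-ed together per job. */
  enum output_flags {
      OUT_STORE = 1 << 0,  /* keep the output in a file under /tmp */
      OUT_MAIL  = 1 << 1,  /* mail the stored file to the user */
      OUT_GZIP  = 1 << 2,  /* gzip the output as it is written */
      OUT_TAIL  = 1 << 3   /* keep the last 10 lines in a memory buffer */
  };

  struct job_io {
      unsigned output;     /* combination of output_flags */
      int stdin_open;      /* nonzero: the client may attach its stdin */
  };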

Job management
---------------------------
- Shutdown: stop all the background processes related to the queues.
- tail (id): 'tail' the last lines of a process's output.
- list: list the queues, with relevant information
- wait (id)*: block until some process dies
- remove (id)+: remove jobs from the queue
- clear: remove the information about dead jobs (errorlevels, etc.)


---------------------------
Implementation
---------------------------

Client/Server
----------------------
A central server daemon will hold the main list of jobs and open a socket;
all the clients will tell it what to do through that socket.
This daemon will be started automatically if a client cannot find it (see the
sketch below).
The client will go immediately into the background when the user adds a new
job.
A minimal version check should be done between client and server, to ensure
the dialogue will be correct.
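
A hedged sketch of the client side of that startup logic: try to connect to the
server's Unix socket and, if nobody is listening, spawn the daemon and retry.
The "ts --server" invocation and the helper names are assumptions made for this
example; error handling and reaping of the forked child are omitted.

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <sys/un.h>
  #include <unistd.h>

  /* Try to connect to the server socket; return the fd, or -1 on failure. */
  static int connect_to_server(const char *path)
  {
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);
      if (fd == -1)
          return -1;

      struct sockaddr_un addr;
      memset(&addr, 0, sizeof(addr));
      addr.sun_family = AF_UNIX;
      strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

      if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1) {
          close(fd);
          return -1;
      }
      return fd;
  }

  /* Connect to the server, starting it first if it is not running yet. */
  int ensure_server(const char *path)
  {
      int fd = connect_to_server(path);
      if (fd != -1)
          return fd;

      if (fork() == 0) {
          execlp("ts", "ts", "--server", (char *) NULL);
          _exit(127);
      }
      sleep(1);                    /* crude wait for the daemon to bind */
      return connect_to_server(path);
  }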

Connections
-----------------------
The connections could go through a Unix socket in /tmp, as the X Window System
does.
An environment variable could select a socket other than the default;
let's say, /tmp/ts-$USER-0.socket.
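
For instance (the TS_SOCKET variable name is an assumption made for this
example, not a defined interface), the path could be chosen like this:

  #include <stdio.h>
  #include <stdlib.h>

  /* Fill buf with the socket path: $TS_SOCKET if set, else the default. */
  void socket_path(char *buf, size_t len)
  {
      const char *env = getenv("TS_SOCKET");
      if (env != NULL && env[0] != '\0') {
          snprintf(buf, len, "%s", env);
          return;
      }
      const char *user = getenv("USER");
      snprintf(buf, len, "/tmp/ts-%s-0.socket", user ? user : "unknown");
  }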

Protocol
-----------------------
The clients should pass messages to the server, and vice versa.
Messages should exist for at least:
- new job
- list jobs
- input data
- tail data
- get job info
- wait job
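
A sketch of what such a message set could look like in C; the message names,
the header layout and the version field are illustrative only, not the actual
protocol of 'ts':

  #include <stddef.h>

  /* Operations the client and server can ask of each other. */
  enum msg_type {
      MSG_NEWJOB,    /* client -> server: enqueue a command */
      MSG_LIST,      /* client -> server: ask for the job list */
      MSG_INPUT,     /* client -> server: stdin data for a running job */
      MSG_TAIL,      /* server -> client: last output lines of a job */
      MSG_INFO,      /* client -> server: get job info (errorlevel, times) */
      MSG_WAIT       /* client -> server: block until a job dies */
  };

  /* Fixed-size header; payload_len bytes of data follow it on the socket. */
  struct msg {
      int version;          /* protocol version, checked on both sides */
      enum msg_type type;
      int jobid;            /* job the message refers to, if any */
      size_t payload_len;
  };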