* chmod TMPDIR/native, as otherwise a restrictive umask prevents it from being readable by the client
I'm wondering if it wouldn't make sense to leave the daemon out of the game and let the
client package the environment - it could either cache it in /tmp and leave a note in
~/.icecream, or package it right into ~/.icecream and use e.g. the inode numbers of
the compilers to verify it's the right environment - think of NFS users changing machines.
This would simplify some things and avoid the above bugs (and it would make it more
convenient for users of /opt/gcc-cvs/bin too).
This would also help users who are running in a chroot environment where the local
daemon cannot even find the path to the compiler.
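The inode-number check mentioned above could be sketched roughly as follows. This is only an illustration of the idea, not icecream code; the names `compiler_inode` and `env_still_valid` are made up for this sketch.

```cpp
// Hypothetical sketch: validate a cached environment by comparing the
// inode recorded at packaging time against the compiler's current inode.
// An inode mismatch catches e.g. NFS users who changed machines and now
// see a different compiler binary under the same path.
#include <sys/stat.h>
#include <cstdint>
#include <string>

// Return the inode of a file, or 0 if it cannot be stat'ed.
uint64_t compiler_inode(const std::string &path) {
    struct stat st;
    if (stat(path.c_str(), &st) != 0)
        return 0;
    return static_cast<uint64_t>(st.st_ino);
}

// The cached environment is considered valid only if the compiler it
// was built from is still the same file.
bool env_still_valid(const std::string &compiler, uint64_t recorded_inode) {
    uint64_t now = compiler_inode(compiler);
    return now != 0 && now == recorded_inode;
}
```

Note that inodes are only unique per filesystem, so a real check would probably want to record the device number (`st_dev`) as well.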
* Improve Documentation (cschum)
* make the protocol version a uint32, not a hand-built array.
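A minimal sketch of what that could look like on the wire, assuming the version is sent as a fixed 4-byte field in network byte order (the constant `PROTOCOL_VERSION` and its value are placeholders, not the real ones from the icecream sources):

```cpp
// Encode/decode the protocol version as a plain uint32 in network byte
// order instead of a hand-built byte array.
#include <arpa/inet.h>
#include <cstdint>
#include <cstring>

const uint32_t PROTOCOL_VERSION = 32;  // assumed value, for illustration

// Write the version into a 4-byte wire buffer (big-endian).
void encode_version(unsigned char buf[4]) {
    uint32_t net = htonl(PROTOCOL_VERSION);
    std::memcpy(buf, &net, sizeof(net));
}

// Read it back on the receiving side.
uint32_t decode_version(const unsigned char buf[4]) {
    uint32_t net;
    std::memcpy(&net, buf, sizeof(net));
    return ntohl(net);
}
```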
* let the client specify whether it was compiled with a recent compiler (supporting -param). If so,
let the daemon compile with those options; otherwise leave them out.
Akademy must-haves:
* daemon: only accept connections from a sane net prefix (the scheduler can determine
this and send it out via confMsg) to avoid being exposed to the internet
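The prefix check itself could look roughly like this, assuming IPv4 and a prefix length handed out by the scheduler; the helper names are invented for this sketch:

```cpp
// Hedged sketch: accept a peer only if its address falls inside a
// configured "sane" IPv4 prefix, e.g. 192.168.0.0/16.
#include <arpa/inet.h>
#include <cstdint>

// True if addr is inside net/prefix_len (all in host byte order,
// prefix_len assumed to be 0..32).
bool in_prefix(uint32_t addr, uint32_t net, int prefix_len) {
    if (prefix_len == 0)
        return true;  // a /0 prefix matches everything
    uint32_t mask = 0xFFFFFFFFu << (32 - prefix_len);
    return (addr & mask) == (net & mask);
}

// Convenience wrapper taking dotted-quad strings.
bool peer_allowed(const char *peer, const char *net, int prefix_len) {
    in_addr a{}, n{};
    if (inet_pton(AF_INET, peer, &a) != 1)
        return false;
    if (inet_pton(AF_INET, net, &n) != 1)
        return false;
    return in_prefix(ntohl(a.s_addr), ntohl(n.s_addr), prefix_len);
}
```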
* benchmark/testing jobs (broken machines with broken ram and broken network card on the
icecream network destroy the fun and are not easy to find manually)
* Option -iiface to specify the interface[s] to listen on for the scheduler, or
to use for the daemon.
* Don't explicitly forbid tunnels, because they could be useful for things like
* If someone calls an amd64 client on a host that runs an ia32 daemon and there are
no other amd64 daemons in the farm, they will get no answer, just a timeout from
the scheduler (quite a corner case, but neat)
* use syslog
* Log problems found in the scheduler log - or even in the monitor. E.g. if a client
can't reach a given daemon, it should be able to tell. Perhaps the scheduler could even
disable that host for some penalty time
* Reduce amount of force-waits, especially if they involve network latency (scheduler queries)
and daemon context switches:
- remove the need for EndMsg
- do not ask the daemon about the first job (WIP dirk)
* if a compile job SIGSEGVs or SIGABRTs, make sure to recompile locally, because it could
be just a glibc/kernel incompatibility on the remote side
* Add a heuristic which optimises for overhead reduction. Statistics show ( ;) ) that for
e.g. the Linux kernel the file size varies a lot, and small jobs should preferably be compiled
locally and bigger ones remotely.
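In its simplest form that heuristic could be a single size threshold on the preprocessed source; the 100 KiB cut-off below is an assumed placeholder, not a measured value, and `place_job` is an invented name:

```cpp
// Minimal sketch of a size-based placement heuristic: below some
// threshold the network overhead dominates the compile time, so the
// job should stay local; above it, going remote pays off.
#include <cstddef>

enum class Where { Local, Remote };

constexpr std::size_t kSmallJobBytes = 100 * 1024;  // assumed threshold

// Decide placement from the preprocessed source size.
Where place_job(std::size_t preprocessed_bytes) {
    return preprocessed_bytes < kSmallJobBytes ? Where::Local
                                               : Where::Remote;
}
```

A real implementation would presumably tune the threshold from measurements rather than hard-coding it.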
* Split the number of jobs into the number of compile jobs (cheap) and the number of non-compile
jobs (can be expensive, e.g. ld or meinproc). The reason is that multicore chips are becoming more
and more common. Today you can get quad cores easily, and in a few months we'll have 8 cores, and
several link jobs at the same time can pull down your whole system.
(I can live with a hard-coded number of one non-compile job, but others probably prefer a
* Consider launching a scheduler on-demand if there is none available or if a daemon knows
it has a better version than the scheduler that is available (https://github.com/icecc/icecream/issues/84).
Suggestions from "Wilson Snyder" sicortex.com:
- Add ICECREAM_RUN_ICECCD to make the scheduler machine not run iceccd
Christoph Thielecke writes:
> 1) Problem with icecream: icecc copied /usr/bin/gcc-4.2 (which is, however, a
> symlink to the hardening wrapper that was installed, see the listing
> below). Could you perhaps add a check whether gcc-x.y ends in .real
> or whether it is the wrapper?
Andi Kleen writes to protect against misconfiguration:
>generate somefile.h on the client with
>#if __GNUC__ != expected_major || __GNUC_MINOR__ != expected_minor
>and then run with
> (this file needs to be in the environment created by create_env)
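The suggested guard header could be sketched like this. The `EXPECTED_GNUC*` macro names are invented here; in real use, create_env would bake in the packaged compiler's version numbers, while this standalone sketch defaults them to the current compiler's own values so that it compiles anywhere:

```cpp
// Sketch of a version-guard header as suggested above: abort the
// remote compile at preprocessing time if the compiler does not match
// the one the environment was built from.
#if defined(__GNUC__)
#  ifndef EXPECTED_GNUC
     // Placeholder defaults; create_env would hardcode the real
     // numbers of the packaged compiler (e.g. 4 and 2 for gcc-4.2).
#    define EXPECTED_GNUC __GNUC__
#    define EXPECTED_GNUC_MINOR __GNUC_MINOR__
#  endif
#  if __GNUC__ != EXPECTED_GNUC || __GNUC_MINOR__ != EXPECTED_GNUC_MINOR
#    error "icecream: remote compiler does not match build environment"
#  endif
#endif

// Reaching this line means the compile-time version check passed.
constexpr bool compiler_matches_env = true;
```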
From thedrow (distcc maintainer)
>Merging this back might just mean feature parity with what icecream currently
>provides or merging some of the code directly to distcc.
>I think that the entire Linux community will gain from one tool that is able to
>distribute compilation of large C/C++ components that provides more flexibility
>and allows you to grow from the easy to setup distcc type of deployment to the 100
>machines build farm that icecc provides.
>Are there more features that diverged from distcc except for the server/client
We think the only other feature is better support for automatic chroot.