File: condor_transfer_data.rst

*condor_transfer_data*
======================

transfer spooled data
:index:`condor_transfer_data<single: condor_transfer_data; HTCondor commands>`\ :index:`condor_transfer_data command`

Synopsis
--------

**condor_transfer_data** [**-help | -version**]

**condor_transfer_data** [
**-pool** *centralmanagerhostname[:portnumber]* |
**-name** *scheddname* ] | [**-addr** *"<a.b.c.d:port>"*]
*cluster... | cluster.process... | user...* |
**-constraint** *expression* ...

**condor_transfer_data** [
**-pool** *centralmanagerhostname[:portnumber]* |
**-name** *scheddname* ] | [**-addr** *"<a.b.c.d:port>"*] **-all**

Description
-----------

*condor_transfer_data* causes HTCondor to transfer spooled data. It is
meant to be used in conjunction with the **-spool** option of
*condor_submit*, as in

.. code-block:: console

    $ condor_submit -spool mysubmitfile

Submission of a job with the **-spool** option causes HTCondor to spool
all input files, the job event log, and any proxy across a connection to
the machine where the *condor_schedd* daemon is running. After spooling
these files, the machine from which the job is submitted may disconnect
from the network or modify its local copies of the spooled files.

When the job finishes, it has :ad-attr:`JobStatus` = 4, meaning that it
has completed. Its output remains spooled on the machine running the
*condor_schedd*, and *condor_transfer_data* retrieves that output.
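
For example, assuming the spooled job was placed in cluster 42 as
process 0 (an illustrative job identifier, not one from the text
above), its output could then be retrieved with:

.. code-block:: console

    $ condor_transfer_data 42.0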

Options
-------

**-help**
    Display usage information
**-version**
    Display version information
**-pool** *centralmanagerhostname[:portnumber]*
    Specify a pool by giving the central manager's host name and an
    optional port number
**-name** *scheddname*
    Send the command to a machine identified by *scheddname*
**-addr** *"<a.b.c.d:port>"*
    Send the command to a machine located at *"<a.b.c.d:port>"*
*cluster*
    Transfer spooled data belonging to the specified cluster
*cluster.process*
    Transfer spooled data belonging to a specific job in the cluster
*user*
    Transfer spooled data belonging to the specified user
**-constraint** *expression*
    Transfer spooled data for jobs which match the job ClassAd
    expression constraint (see the example following this list)
**-all**
    Transfer all spooled data
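
For example, the following hypothetical command (the owner name
``someuser`` is assumed for illustration) transfers the spooled data of
all completed jobs belonging to that user:

.. code-block:: console

    $ condor_transfer_data -constraint 'Owner == "someuser" && JobStatus == 4'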

Exit Status
-----------

*condor_transfer_data* will exit with a status value of 0 (zero) upon
success, and it will exit with the value 1 (one) upon failure.