File: condor_remote_cluster.rst

*condor_remote_cluster*
=======================

Manage and configure the clusters to be accessed.
:index:`condor_remote_cluster<single: condor_remote_cluster; HTCondor commands>`
:index:`condor_remote_cluster command`

Synopsis
--------

**condor_remote_cluster** [-**h** || --**help**]

**condor_remote_cluster** [-**l** || --**list**] [-**a** || --**add <host>
[scheduler]**] [-**r** || --**remove <host>**] [-**s** || --**status
<host>**] [-**t** || --**test <host>**]

Description
-----------

*condor_remote_cluster* is part of a feature for accessing high
throughput computing resources from a local desktop using only an SSH
connection.

*condor_remote_cluster* enables management and configuration of the
access point of the remote computing resource.
After initial setup, jobs submitted to the local job queue are
forwarded to the remote system.
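
The following is a minimal, hypothetical sketch of such a forwarded
submission, assuming the grid universe's ``batch`` resource type is
used to address an installed cluster; the scheduler type (``slurm``),
host, and file names are placeholders, not values from this page:

.. code-block:: text

    # Hypothetical submit description file. The scheduler type and
    # host must match a cluster added with condor_remote_cluster.
    universe      = grid
    grid_resource = batch slurm alice@submit.example.com
    executable    = my_job.sh
    output        = my_job.out
    error         = my_job.err
    log           = my_job.log
    queue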

A **<host>** has the form ``[user@]fqdn.example.com[:22]``; the user
name and SSH port are optional, and the port defaults to 22.
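
For example (all values hypothetical), each of the following is an
acceptable **<host>**:

.. code-block:: text

    submit.example.com
    alice@submit.example.com
    alice@submit.example.com:2222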

Options
-------

 **-help**
    Print usage information and exit.
 **-list**
    List all installed clusters.
 **-remove** *<host>*
    Remove an already installed cluster, where the cluster is identified
    by *<host>*.
 **-add** *<host> [scheduler]*
    Install and add a cluster defined by *<host>*. The optional
    *scheduler* specifies the batch scheduler running on the cluster.
    Valid values are ``pbs``, ``lsf``, ``condor``, ``sge``, and
    ``slurm``. If not given, the default is ``pbs``. See the example
    following this list.
 **-status** *<host>*
    Query and print the status of an already installed cluster, where
    the cluster is identified by *<host>*.
 **-test** *<host>*
    Attempt to submit a test job to an already installed cluster, where
    the cluster is identified by *<host>*.
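
Examples
--------

A short console sketch tying these options together, again using a
hypothetical host:

.. code-block:: console

    $ condor_remote_cluster --add alice@submit.example.com slurm
    $ condor_remote_cluster --status alice@submit.example.com
    $ condor_remote_cluster --test alice@submit.example.com
    $ condor_remote_cluster --list
    $ condor_remote_cluster --remove alice@submit.example.com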